PICList Thread
'[EE]:: Hard drive reliability'
2014\04\26@103227 by RussellMc

Several years ago Google did a survey of hard-drive reliability of their
drives and concluded that one brand was substantially more unreliable than
others. They did not disclose the brands in their results.
I have about 30 TB of 'online' storage (internal drives + 15 external
USB2/USB3 connected drives in a mix of 1, 1.5, 2 & 3 TB capacities). Based
on my experiences I decided that Seagate were the bad performers. I was
right.

Online storage provider Backblaze, with about 20,000 TB of storage, have
carried out a reliability review, and unlike Google, have both published
the results in some detail and replied to questions.

Reading the results is liable to be interesting and informative for many,
but I've provided a quick summary and comments below:

Sleeping ... More anon maybe.

*Overall:*

    Best: Hitachi - but much dearer than WD or Seagate (2X+ in NZ)
      OK: WD
    Poor: Seagate


Utterly terrible:

     Seagate Barracuda Green 1.5 TB.

Risky:

     WD Green 3TB, Seagate LP 2TB.

Workhorse:

     Seagate 1.5 TB


WD bought Hitachi's HDD business (HGST) in 2012. The effect on future WD & Hitachi results is TBD.

______________________

The Backblaze data appears on many sites. The data available varies, so looking
at a number of the reports is useful.
Some sites may just regurgitate the BB data - others may have more from
other channels. Below I have not tried too hard to determine quality or
overlap with the BB report.


Backblaze's own page. Excellent.


http://blog.backblaze.com/2014/01/21/what-hard-drive-should-i-buy/

     Death of a hard drive

          http://blog.backblaze.com/2013/10/28/alas-poor-stephen-is-dead/

    How long ... , Nov 13


http://blog.backblaze.com/2013/11/12/how-long-do-disk-drives-last/

  Good


http://bychaw.blogspot.co.nz/2011/04/what-is-inside-wd-elements-hard-drive.html


Good detail breakdowns.


http://www.gamersnexus.net/news/1293-backblaze-hard-drive-failure-rates

PCWorld - looks good


http://www.pcworld.com/article/2089464/three-year-27-000-drive-study-reveals-the-most-reliable-hard-drive-makers.html

ARSTECHNICA


http://arstechnica.com/information-technology/2014/01/putting-hard-drive-reliability-to-the-test-shows-not-all-disks-are-equal/


Internal HDD - NZ $
pricespy.co.nz/category.php?k=358&o=eg_1198&rev=1#prodlista
External
http://pricespy.co.nz/category.php?k=360&o=eg_1198&rev=1#prodlista

________________

*Not bad - just bad for us ...*

Some drives just don't work in the Backblaze environment. We have not
included them in this study. It wouldn't be fair to call a drive "bad" if
it's just not suited for the environment it's put into.

"The drives that just don't work in our environment are Western Digital
Green 3TB drives and Seagate LP (low power) 2TB drives," he wrote. "Both of
these drives start accumulating errors as soon as they are put into
production."

"We think this is related to vibration. These drives are designed to be
energy-efficient, and spin down aggressively when not in use," he added.
"In the Backblaze environment, they spin down frequently, and then spin
right back up. We think that this causes a lot of wear on the drive."


Read more: Which brand of hard disk is most reliable? | News | PC Pro
http://www.pcpro.co.uk/news/386647/which-brand-of-hard-disk-is-most-reliable

Number of Hard Drives by Model at Backblaze

Model                                            Size   Drives  Avg age   Annual
                                                                 (years)  failure rate
Seagate Desktop HDD.15 (ST4000DM000)             4.0TB    5199     0.3       3.8%
Hitachi GST Deskstar 7K2000 (HDS722020ALA330)    2.0TB    4716     2.9       1.1%
Hitachi GST Deskstar 5K3000 (HDS5C3030ALA630)    3.0TB    4592     1.7       0.9%
Seagate Barracuda (ST3000DM001)                  3.0TB    4252     1.4       9.8%
Hitachi Deskstar 5K4000 (HDS5C4040ALE630)        4.0TB    2587     0.8       1.5%
Seagate Barracuda LP (ST31500541AS)              1.5TB    1929     3.8       9.9%
Hitachi Deskstar 7K3000 (HDS723030ALA640)        3.0TB    1027     2.1       0.9%
Seagate Barracuda 7200 (ST31500341AS)            1.5TB     539     3.8      25.4%
Western Digital Green (WD10EADS)                 1.0TB     474     4.4       3.6%
Western Digital Red (WD30EFRX)                   3.0TB     346     0.5       3.2%
Seagate Barracuda XT (ST33000651AS)              3.0TB     293     2.0       7.3%
Seagate Barracuda LP (ST32000542AS)              2.0TB     288     2.0       7.2%
Seagate Barracuda XT (ST4000DX000)               4.0TB     179     0.7       n/a
Western Digital Green (WD10EACS)                 1.0TB      84     5.0       n/a
Seagate Barracuda Green (ST1500DL003)            1.5TB      51     0.8     120.0%

2014\04\26@105759 by veegee

They all fail. The only question is when.

I treat hard drives as disposable things that can spontaneously combust at
any time. For anything important, I use 3 drives in RAID 5 mode. I set up a
Linux server and use ext4 over mdadm RAID 5.

Seagate drives are cheap and quite fast for a large capacity. The other
brands may be more reliable on average, but they cost significantly more. I
don't need my drives to last 10 years because they'll be outdated by then.
If a drive fails, I replace it with a larger, newer one and rebuild the
RAID array. I can use the extra space for whatever I want; LVM and mdadm
make it very easy to manage volumes.
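
For anyone wanting to try the same layout, a minimal sketch of a 3-drive mdadm RAID 5 with ext4 on top follows. The device names (/dev/sdb, /dev/sdc, /dev/sdd), mount point and config path are illustrative only, not taken from veegee's setup, and mdadm --create destroys whatever is on the listed disks:

    # create a 3-drive software RAID 5 array (wipes the listed disks)
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

    # watch the initial sync
    cat /proc/mdstat

    # put ext4 straight on the array and mount it
    mkfs.ext4 /dev/md0
    mkdir -p /srv/data
    mount /dev/md0 /srv/data

    # record the array so it assembles at boot (config file path varies by distro)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf

LVM can be layered between the array and the filesystem (pvcreate /dev/md0, then vgcreate and lvcreate) when volumes need to be resized or rearranged later.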

2014\04\26@112529 by John Guillory

Sadly I have to agree!  I had one Seagate backup drive; it died after about a year.  The darn thing is impossible to open up!  I wanted to change the hard drive out, but can't.  I had two hard drives in laptops (300 GB Western Digital) and a third like it.  Only the third drive still runs.  At least the Western Digital drives were 3 years old.  Boy, drives are really getting better!  I've got some full-height 10 MB hard drives still running back at my house in Louisiana.  We used to not have problems till after 10 years; now after a year you're grateful.

--
KF5QEO
John Guillory
spam_OUTwestlakegeekTakeThisOuTspamyahoo.com
Cell: 601-754-9233
Pinger: 337-240-7890
Google Voice: 601-265-1307


{Quote hidden}


2014\04\26@171829 by Robert Dvoracek

I used to use Seagate exclusively, but the newer drives are pants.  I was in warranty hell for about a year with drive after drive failing.  Finally I said enough and went out and bought a Western Digital and haven't had a problem since.  It used to be the other way around.  The old Caviar drives would drop like flies.  The 120MB Seagate drive from my first computer is around here somewhere and probably still works.

Sent from my iPad

On Apr 26, 2014, at 10:43 AM, "RussellMc" <apptechnzspamKILLspamgmail.com> wrote:

{Quote hidden}


2014\04\27@040403 by Peter Johansson

On Sat, Apr 26, 2014 at 10:31 AM, RussellMc <.....apptechnzKILLspamspam.....gmail.com> wrote:

> Several years ago Google did a survey of hard-drive reliability of their
> drives and concluded that one brand was substantially more unreliable than
> others. They did not disclose the brands in their results.
> I have about 30 TB of 'online' storage (internal drives + 15 external
> USB2/USB3 connected drives in a mix of 1, 1.5, 2 & 3 TB capacities). Based
> on my experiences I decided that Seagate were the bad performers. I was
> right.

Historically the simple trick to finding reliable drives is to look
for those with a 5 year warranty.  It was fairly easy to find
consumer-grade drives with 5-year warranties as recently as a few
years ago.  I had very good results with the Seagate Barracudas for
many years, but as soon as they went from five to three year
warranties I switched to WD -- and in retrospect I am very glad I did.

I built a rather large server not too long before the flood so I have
been out of touch with the market, but with HD prices finally coming
down I have been looking again.   It seems as if the only consumer
grade drives with 5-year warranties any more are the WD Black series
and these are practically as expensive as enterprise grade drives.
All the rest have only two or three year warranties.

-p.

2014\04\27@043957 by RussellMc

On 27 April 2014 02:57, veegee <EraseMEveegeespam_OUTspamTakeThisOuTveegee.org> wrote:

> They all fail. The only question is when.
>
>
In my (long ago) prior corporate lifetime I used to tell users that all
hard drives fail - somewhere between 10 years and 10 minutes after you
first use them, and that you can't be sure which it will be.

Quick look suggests that the oldest of the 15 USB connected external drives
on this system is a 5 year old Seagate 1 TB. So far, as far as I know, it
has behaved well. Time for some overall investigation I think.


> I treat hard drives as disposable things that can spontaneously combust at
> any time.


He's reading my mind :-).
I used occasionally to use an analogy for fun that gave a good enough idea
of what to expect.
This has been embellished somewhat as I go along :-).
"Imagine a castle with a large banqueting hall. On one side there is an
immense open fire that is stoked night and day.
Occasionally the fire sends out showers of sparks of greater or lesser
size. Occasionally sparks drift a long way. Sometimes a long long long way.
Who can say how far they will drift today.
On the far side of the large hall is a great sheet of super-fine vellum paper
stretched floor to ceiling on a frame. Scribes use ladders
to ascend the wall and record the annals of the kingdom on the vellum.
Every so often sparks from the fire reach the vellum sheet. Sometimes they
produce no result. Sometimes they make a mark so minor as to be wholly
irrelevant when reading back the related annals - the eye corrects for the
mark unawares. Occasionally a small flareup and burn occurs and a word or
sentence or a few scattered words may be lost. These may be able to be
corrected. If a burn is too large the text may be rewritten elsewhere.
Sometimes a large hole may occur. Very occasionally the whole sheet she
goeth up in a flaming conflagration, senor. So, on another wall there is a
duplicate sheet that ...
Vellum sheets have been known to last for over a decade. Years is usual.
The late Cedric the sorrowful copied out a whole new sheet after a
conflagration and it was gone by lunchtime. Them's the breaks."

I've seen a new HP hard drive last half a day. Sure, it must have had
issues that should have been detectable. But data could still be lost on it.
I've heard a new disk last under a week. Very loud while it lasted.

For anything important, I use 3 drives in RAID 5 mode. I set up a
> Linux server and use ext4 over mdadm RAID 5.
>

I don't use RAID - and it may well be a better idea than what I do use.
I cross-backup the main day-to-day directories between drives and keep 2 or 3 or
sometimes more copies of associated groups of photo files, which are the
main space consumer.
Single "events" may have a master DVD, but I'm tending to use USB memory
sticks more and DVD capacity is too limiting. I've never got to using
Blu-ray.

>
> Seagate drives are cheap and quite fast for a large capacity. The other
> brands may be more reliable on average, but they cost significantly more.


Seagate and WD are comparable in price if bought when on special from
lowest-price sellers - which is the way I buy mine, as the price difference from
walk-in-and-buy can be vast.
My informal target is $NZ50/terabyte - best prices lately are somewhat up
on that. 3TB drives are usually the best capacity per dollar at present and 4TB will soon be similar.

My experience and the report I cited suggest that Seagate are typically
MUCH less reliable than WD, but it is also model dependent.

> I don't need my drives to last 10 years because they'll be outdated by then.
> If a drive fails, I replace it with a larger, newer one and rebuild the
> RAID array. I can use the extra space for whatever I want; LVM and mdadm
> make it very easy to manage volumes.
>

My smallest external drive is 1 TB and largest is 3TB. 4TB soon.
A failed 1TB drive is about 3.5% of total capacity, and whatever it
held would be recopied from its backup to a new 3TB drive.

I dislike using systems that span volumes in ways that I do not have
control and awareness of. That's just me. Others are happier to trust the
machine to do a good job and allow it to make decisions which are somewhat
opaque at the lower levels.


        Russell

2014\04\27@054554 by Bob Axtell

> Historically the simple trick to finding reliable drives is to look
> for those with a 5 year warranty.  It was fairly easy to find
> consumer-grade drives with 5-year warranties as recently as a few
> years ago.  I had very good results with the Seagate Barracudas for
> many years, but as soon as they went from five to three year
> warranties I switched to WD -- and in retrospect I am very glad I did.
>
> I built a rather large server not too long before the flood so I have
> been out of touch with the market, but with HD prices finally coming
> down I have been looking again.   It seems as if the only consumer
> grade drives with 5-year warranties any more are the WD Black series
> and these are practically as expensive as enterprise grade drives.
> All the rest have only two or three year warranties.
>
There is a reason WD Black drives are so expensive. They last.

--Bob A

2014\04\27@062242 by Bob Axtell


On 4/27/2014 1:39 AM, RussellMc wrote:
{Quote hidden}

Everybody has their own approach to good data protection,  so I'll share mine.

I always assume that my HD is one spin away from disaster. Some years ago,
I discovered Truecrypt. Truecrypt is a truly wonderful FREE encrypt/decrypt-on-the-fly
program that NEVER gets scrambled by a power outage. So I keep my two clients'
data on a single, quality 16mb flash drive, each in a separate "container". When I work on either
client's data, I use that flash drive. I carry it securely around my neck when not working, and I drop a
copy of the container onto some HD every night.

If I LOSE the flash drive, the data is safe because nobody can read the container. If the HD
gets corrupted, I re-install MPLAB then use the flash drive as before. NOTHING ever gets lost.
I keep all transactions on the flash drive, all databases (PCB layout files, OrCAD Capture files,
component PDFs, etc. - it's all there).
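
For Linux users who want a similar "encrypted container on a flash drive" workflow, here is a rough sketch using cryptsetup/LUKS as a stand-in (this is not Bob's Truecrypt setup, just one open alternative; the file names, size and mount points are made up):

    # create a 1 GB container file on the stick and format it as LUKS (asks for a passphrase)
    dd if=/dev/zero of=/media/stick/clients.img bs=1M count=1024
    cryptsetup luksFormat /media/stick/clients.img

    # open it, put a filesystem inside, and mount it
    cryptsetup open /media/stick/clients.img clients
    mkfs.ext4 /dev/mapper/clients
    mount /dev/mapper/clients /mnt/clients

    # ... work on the files ...

    # close it again before pulling the stick
    umount /mnt/clients
    cryptsetup close clients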

--Bob A

2014\04\27@173447 by Tamas Rudnai

Right, so far we have discussed different storage configurations, backup strategies and even technology advances, like flash drives vs magnetic disks. It seems to me though that everyone is more afraid of physical damage or physical data loss than of software error or cybercrime.

My problem with USB sticks is that when I forget to unmount (safe-remove) the stick, the file system may be damaged, causing lost data. And this has nothing to do with the hardware but with the software design - it is understandable that we want to keep everything in cache until we remove the disk, at which point it is absolutely necessary to flush the data out, but that works against the consumer market. I keep forgetting to do it, not to mention power outages or other undesirable moments, like your kids removing the stick while you are not paying attention, etc.

The other problem is cyber threats - ransomware, for example: the infamous CryptoLocker is a good example, where the attacker encrypts our data and demands money to decrypt it. Not to mention APTs (sometimes referred to as targeted attacks), where the attacker aims at either data theft or sabotage. Obviously data theft cannot be prevented by backup strategies; sabotage, however, can be handled - recovery is much easier and the damage can be minimized.

Nowadays I tend to lean towards cloud backups to avoid such things, as well as to prevent data loss from natural disasters (when both live data and backup are destroyed by fire or some other cause). What is your opinion about that? Should we trust Microsoft, Apple or Google to store our data, or is it better to keep it on our own premises?

Tamas

{Quote hidden}


2014\04\28@004121 by veegee

On Sun, Apr 27, 2014 at 5:34 PM, Tamas Rudnai <RemoveMEtamas.rudnaiTakeThisOuTspamgmail.com>wrote:

> Nowadays I tend to lean towards cloud backups to avoid such things as well
> as prevent data loss because of natural disasters (when both live data and
> backup is destroyed because of fire or other cause). What is your opinion
> about that? Shall we trust more on Microsoft, Apple or Google to store our
> data or better to keep it in our premises?
>

You can trust them. You can trust them to scan and inspect your data and
freely data mine from it. Best idea would be to 7zip, then encrypt, then
upload.
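
A minimal sketch of that "7zip, then encrypt, then upload" idea, with made-up file names and destination; rsync is just one way to push the result to whatever provider is used:

    # pack the data
    7z a -mx=9 backup.7z /home/me/important

    # symmetric encryption; gpg prompts for a passphrase and writes backup.7z.gpg
    gpg --symmetric --cipher-algo AES256 backup.7z

    # upload only the encrypted file
    rsync -av backup.7z.gpg backupuser@example.com:offsite/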

2014\04\28@004658 by veegee

On Sun, Apr 27, 2014 at 4:39 AM, RussellMc <spamBeGoneapptechnzspamBeGonespamgmail.com> wrote:

> I dislike using systems that span volumes in ways that I do not have
> control and awareness of. That's just me. Others are happier to trust the
> machine to do a good job and allow it to make decisions which are somewhat
> opaque at the lower levels.
>

I would say mdadm (software RAID) is just as good or better than a hardware
RAID card. LVM is wonderful and as long as you make the system yourself,
you'll know exactly how your volumes are configured. I personally drew a
map and taped it to the servers when I had more than a few to look after.

The new BTRFS filesystem doesn't even need mdadm since it manages
everything (redundancy, copy on write, etc.) itself. Still new, so I prefer
to stick with mdadm and LVM until BTRFS is fully stable and production
tested.

2014\04\28@081557 by Martin K


On 4/28/2014 12:46 AM, veegee wrote:
>
> The new BTRFS filesystem doesn't even need mdadm since it manages
> everything (redundancy, copy on write, etc.) itself. Still new, so I prefer
> to stick with mdadm and LVM until BTRFS is fully stable and production
> tested.

ZFS is already production stable on FreeBSD.
Russell:
You can use FreeNAS with a pile of standard hard drives and hardware to create a storage server that doesn't require detailed knowledge of specific concepts. It still requires trusting something though.
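
For comparison with the mdadm examples earlier in the thread, the ZFS equivalent is short; the pool and disk names are hypothetical, and FreeNAS drives the same machinery from its web GUI:

    # single-parity raidz pool across three disks
    zpool create tank raidz ada0 ada1 ada2

    # create a dataset and check pool health
    zfs create tank/backups
    zpool status tank

    # ZFS checksums every block; a periodic scrub re-reads and repairs silently corrupted data
    zpool scrub tank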
-
Martin K


2014\04\28@082434 by John J. McDonough

On Mon, 2014-04-28 at 00:40 -0400, veegee wrote:
> On Sun, Apr 27, 2014 at 5:34 PM, Tamas Rudnai <TakeThisOuTtamas.rudnaiEraseMEspamspam_OUTgmail.com>wrote:
>
> > Nowadays I tend to lean towards cloud backups to avoid such things as well
> > as prevent data loss because of natural disasters (when both live data and
> > backup is destroyed because of fire or other cause). What is your opinion
> > about that? Shall we trust more on Microsoft, Apple or Google to store our
> > data or better to keep it in our premises?
> >
>
> You can trust them. You can trust them to scan and inspect your data and
> freely data mine from it. Best idea would be to 7zip, then encrypt, then
> upload.

Certainly anything in the cloud should be encrypted, although I would
avoid proprietary compression/encryption schemes since these things tend
to disappear.  If you stick with popular, FOSS tools you can count on
years or decades of warning before they disappear.

But keep in mind that web providers, even very large ones, tend to
disappear without warning.  The cloud can be a useful part of your
backup strategy, but it shouldn't be your only backup strategy.

--McD



2014\04\28@170435 by Robert Dvoracek

I use par2 at 10% redundancy on important files.
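
For anyone who hasn't used it, par2 works roughly like this (file names are examples); the recovery files let you rebuild the original even if up to the chosen percentage of it is damaged or missing:

    # create recovery data with 10% redundancy alongside the file
    par2 create -r10 photos-2014.tar.par2 photos-2014.tar

    # later: check the file against the recovery data
    par2 verify photos-2014.tar.par2

    # if damage is found, attempt a repair
    par2 repair photos-2014.tar.par2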

Sent from my iPad

On Apr 28, 2014, at 3:37 PM, "YES NOPE9" <RemoveMEyesspamTakeThisOuTnope9.com> wrote:

{Quote hidden}


2014\04\28@182330 by Peter Johansson

On Mon, Apr 28, 2014 at 3:30 PM, YES NOPE9 <yesEraseMEspam.....nope9.com> wrote:

> My question has to do with preventing slow deterioration of the data stored.  I apologize if this has been discussed and I missed it.
>
> There are many ways to keep multiple copies of important data.  How does one insure that the data has not been corrupted and you are merely continuing to backup corrupt data.  This may be data you do not look at for years.  Is there a file system that manages data integrity with some form of checksum ?  ( I mean checksum in the generic sense which could include polynomials , etc. )

The simplest method would be to generate MD5 or SHA hashes on all your
files and verify them on a regular basis.
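
A minimal sketch of that approach, with example paths; sha256sum could equally be md5sum or sha512sum:

    # build a manifest of hashes for everything under /data
    find /data -type f -exec sha256sum {} + > /data/manifest.sha256

    # later, or after copying to new media: re-check every file against the manifest
    sha256sum --check --quiet /data/manifest.sha256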

I do this for my files and I have never found any corruption of media
("bit rot"), but I did have an interesting experience moving the data
from a bunch of bare IDE disks onto a NAS I built a while back.
Because I had no available IDE ports on the NAS, I was using IDE-USB
dongles to copy data onto the server.  I was getting quite a number of
hash fails on the target and it took me *quite* a while to track this
down to flaky firmware in my USB-IDE dongle.  I wound up putting the
drive in a desktop and copying the files over the network and then
everything worked fine.

-p.


2014\04\28@184945 by veegee

On Mon, Apr 28, 2014 at 3:30 PM, YES NOPE9 <EraseMEyesspamnope9.com> wrote:

> My question has to do with preventing slow deterioration of the data
> stored.  I apologize if this has been discussed and I missed it.
>
> There are many ways to keep multiple copies of important data.  How does
> one insure that the data has not been corrupted and you are merely
> continuing to backup corrupt data.  This may be data you do not look at for
> years.  Is there a file system that manages data integrity with some form
> of checksum ?  ( I mean checksum in the generic sense which could include
> polynomials , etc. )
>
> I have wondered that about operating systems as well.  Is there a
> technique for detecting that operating system files , applications and
> daemons are not slowly corrupting.  I have a old MacIntosh that is now
> forgetting how to use DNS services after running for 2 hours.  Rebooting it
> makes it smart again.  One of these times it will not reboot.
>
>
Yes. All software RAID mechanisms (mdadm, BTRFS, ZFS) provide "scrubbing"
functionality. One normally runs this once a week and it makes sure that
all the data agrees with itself across all redundancy units in a set. It
fixes any corruptions it finds.
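
On plain mdadm the scrub is driven through sysfs; md0 here stands in for whatever the array is called (ZFS and Btrfs have their own scrub commands):

    # start a read/compare pass over the whole array
    echo check > /sys/block/md0/md/sync_action

    # watch progress
    cat /proc/mdstat

    # when it finishes, see how many mismatches were found
    cat /sys/block/md0/md/mismatch_cnt

    # 'repair' rewrites inconsistent stripes instead of only counting them
    echo repair > /sys/block/md0/md/sync_action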

2014\04\28@193638 by Marcel Duchamp

On 4/28/2014 3:49 PM, veegee wrote:

> Yes. All software RAID mechanisms (mdadm, BTRFS, ZFS) provide "scrubbing"
> functionality. One normally runs this once a week and it makes sure that
> all the data agrees with itself across all redundancy units in a set. It
> fixes any corruptions it finds.
>

How long does scrubbing take on, say, a nearly full 1TB drive?

2014\04\28@195139 by James Cameron

On Mon, Apr 28, 2014 at 06:23:28PM -0400, Peter Johansson wrote:
> I do this for my files and I have never found any corruption of media
> ("bit rot") but I did have an interesting experience moving the data
> from a bunch of bare IDE disks onto a NAS a built a while back.
> Because I had no available IDE ports on the NAS, I was using IDE-USB
> dongles to copy data onto the server.  I was getting quite a number of
> hash fails on the target and it took me *quite* a while to track this
> down to flaky firmware in my USB-IDE dongle.  [...]

Yes, I've seen that too.  The USB IDE adapter was somewhat cheap
though, and the product didn't last long in the market.  I use
checksums to verify USB SATA adapters, combined with an eject and
power cycle to flush any adapter or drive caches.

-- James Cameron
http://quozl.linux.org.au/

2014\04\28@225300 by Matt Callow

On 29 April 2014 05:30, YES NOPE9 <RemoveMEyesEraseMEspamEraseMEnope9.com> wrote:

> My question has to do with preventing slow deterioration of the data
> stored.  I apologize if this has been discussed and I missed it.
>
> There are many ways to keep multiple copies of important data.  How does
> one insure that the data has not been corrupted and you are merely
> continuing to backup corrupt data.  This may be data you do not look at for
> years.  Is there a file system that manages data integrity with some form
> of checksum ?  ( I mean checksum in the generic sense which could include
> polynomials , etc. )
>
ZFS does this:

http://en.wikipedia.org/wiki/ZFS

Matt

2014\04\29@112845 by veegee

On Mon, Apr 28, 2014 at 7:36 PM, Marcel Duchamp <
RemoveMEmarcel.duchampspam_OUTspamKILLspamsbcglobal.net> wrote:

> > Yes. All software RAID mechanisms (mdadm, BTRFS, ZFS) provide "scrubbing"
> > functionality. One normally runs this once a week and it makes sure that
> > all the data agrees with itself across all redundancy units in a set. It
> > fixes any corruptions it finds.
>
> How long does scrubbing take on, say, a nearly full 1TB drive?
>

I haven't ever timed it since I just run it as a cron job. I would guess
maybe an hour or two. The process doesn't interfere with the operation of
the volume. It runs in the background as a low priority task, and it can be
stopped at any time. I can't even tell that it's happening; you can
literally just set it up as a cron job once and forget about it forever.

The Arch Linux wiki is the best resource for general Linux best practices
and almost always applies to all Linux distributions:
https://wiki.archlinux.org/index.php/Software_RAID_and_LVM#Scrubbing
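
As an illustration of the "set it up as a cron job and forget it" part, a one-line weekly scrub might look like this (array name and schedule are made up):

    # /etc/cron.d/raid-scrub : check /dev/md0 every Sunday at 03:00
    0 3 * * 0  root  echo check > /sys/block/md0/md/sync_action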

Even after a year of always-on server heavy use, my three Seagate Barracuda
7200.14 ST1000DM003-1CH162 1TB drives in RAID 5 have a grand total of zero
data mismatches.

2014\04\29@182127 by YES NOPE9


I have read all the comments.
This has been an extremely useful thread and I thank all of the contributors for their wonderful information sharing.

Best
Gus in Denver

2014\04\30@031100 by Christopher Head


On Mon, 28 Apr 2014 08:24:30 -0400
"John J. McDonough" <RemoveMEmcdTakeThisOuTspamspamis-sixsigma.com> wrote:

> > You can trust them. You can trust them to scan and inspect your
> > data and freely data mine from it. Best idea would be to 7zip, then
> > encrypt, then upload.
>
> Certainly anything in the cloud should be encrypted, although I would
> avoid proprietary compression/encryption schemes since these things
> tend to disappear.  If you stick with popular, FOSS tools you can
> count on years or decades of warning before they disappear.

7-zip might not be as popular as gzip or bzip2, but it is definitely
open source. Anyway, I would think that, if backup data is being used to
recover from catastrophic hardware failure or site destruction, one
would not need to be digging around and unpacking years-old
backups—hopefully one would have a backup no more than a week or two
old, and thus compressed using tools that were around no more than a
couple of weeks ago and should still be available!
--
Christopher Head


2014\04\30@031108 by Christopher Head


On Mon, 28 Apr 2014 00:46:16 -0400
veegee <EraseMEveegeespamspamspamBeGoneveegee.org> wrote:

> The new BTRFS filesystem doesn't even need mdadm since it manages
> everything (redundancy, copy on write, etc.) itself. Still new, so I
> prefer to stick with mdadm and LVM until BTRFS is fully stable and
> production tested.

Probably a good call. Personally I use Btrfs for my machines, because I
was getting really annoyed at having to manually manage space in LVM
for snapshots rather than letting the filesystem deal with it. I
haven’t had it eat any data AFAIK, but don’t take that as a
recommendation for everyone to go out and start using it.
--
Christopher Head


2014\04\30@031110 by Christopher Head


On Mon, 28 Apr 2014 13:30:47 -0600
YES NOPE9 <RemoveMEyesKILLspamspamnope9.com> wrote:

> My question has to do with preventing slow deterioration of the data
> stored.  I apologize if this has been discussed and I missed it.  
>
> There are many ways to keep multiple copies of important data.  How
> does one insure that the data has not been corrupted and you are
> merely continuing to backup corrupt data.  This may be data you do
> not look at for years.  Is there a file system that manages data
> integrity with some form of checksum ?  ( I mean checksum in the
> generic sense which could include polynomials , etc. )
>
> I have wondered that about operating systems as well.  Is there a
> technique for detecting that operating system files , applications
> and daemons are not slowly corrupting.  I have a old MacIntosh that
> is now forgetting how to use DNS services after running for 2 hours.
> Rebooting it makes it smart again.  One of these times it will not
> reboot.
>
> Gus in Denver

For data at rest, this is less likely than you might think to happen.
All modern hard drives store every disk block with a lot of error
detection and correction codes. If the bits rot a little bit, the drive
will correct them using ECC before the data ever reaches main memory.
If they rot a lot, the drive will report a hard I/O error. It’s very
unlikely that data will be delivered to main memory successfully but be
wrong due to bitrot.

That said, there are known examples of other weird things
happening—for instance, drives have been known to accept data but drop
it on the floor instead of writing it to platter (not even in the face
of power failures, just because of a firmware bug), or accept data and
write it in the wrong place. These two failures would obviously not be
caught by on-drive error detection and correction codes, since in both
cases the data on the disk is intact (it’s just either old or in the
wrong place).

For higher levels of paranoia, yes, Btrfs (and ZFS, according to Matt
Callow, and probably some others) keep checksums of data blocks at the
filesystem level, which would detect any of the above failures. Another
option for rarely-modified data is hashes kept in a separate file with
e.g. md5sum, sha1sum, sha512sum.
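
For completeness, kicking off one of those filesystem-level checks on Btrfs is a one-liner (the mount point is an example):

    # re-read every checksummed block; with a redundant profile, bad copies are rewritten from good ones
    btrfs scrub start /srv/data

    # check progress and results
    btrfs scrub status /srv/data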
--
Christopher Head


2014\04\30@031114 by Christopher Head


On Sun, 27 Apr 2014 14:34:31 -0700
Tamas Rudnai <tamas.rudnaiSTOPspamspamspam_OUTgmail.com> wrote:

> Nowadays I tend to lean towards cloud backups to avoid such things as
> well as prevent data loss because of natural disasters (when both
> live data and backup is destroyed because of fire or other cause).
> What is your opinion about that? Shall we trust more on Microsoft,
> Apple or Google to store our data or better to keep it in our
> premises?

I don’t trust cloud storage for either availability or privacy, even
though I pay for it (not just use a free service).

My backup policy isn’t especially complicated, but I think it covers
what I want it to cover.

I have four storage places:
- Primary storage is the drive in my computer which I use every day.
- Secondary storage is an on-site external drive.
- Tertiary storage is a small, physically secure, external drive
 (actually this should be a bunch of drives in different places) which
 is expected to survive site destruction.
- Cloud is a paid account with a cloud provider.

These storage places are used as follows:
- Primary storage holds my live data, encrypted with a password.
- Secondary storage holds backups of primary storage, encrypted with a
 strong random key, as well as that key itself encrypted with a
 password.
- Tertiary storage holds the key encrypted with a password only.
- Cloud storage holds the encrypted backups only.

This system is robust against destruction of any two components, and
some (but not all) cases of three-component destruction:
- If the set of destroyed components does not include primary storage, I
 don’t care.
- If primary and secondary are destroyed, I download backups from the
 cloud and decrypt them using the key from tertiary.
- If primary and tertiary are destroyed, I create a new tertiary using
 keys from secondary.
- If primary and cloud are destroyed, I open an account with a new
 cloud provider and upload backups from secondary.
- If primary, tertiary, and cloud are destroyed, I decrypt backups on
 secondary using the key from secondary to get a new primary, copy the
 keys from secondary to a new tertiary, and copy the backups from
 secondary to a new cloud.

The system maintains privacy in the following ways:
- An attacker going after primary storage must both crack my live
 password and steal the drive to get at the data.
- An attacker going after secondary storage must both crack my backup
 key protection password and steal the drive to get at the data.
- An attacker going after tertiary storage must both crack my backup
 key protection password and steal the drive (in order to be useful
 the drive contains download credentials for the cloud provider, but I
 can quickly revoke them).
- An attacker going after cloud storage must not only gain access to
 the cloud storage system, but must crack the backup encryption key
 which, being a proper, randomly generated encryption key, should be
 much harder than a mere password (cracking a password is insufficient
 because the encryption keys, even password-protected, simply don’t
 exist in the cloud).
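
The core of the scheme, backups encrypted with a strong random key while only the password-protected key goes to the smaller storage places, can be sketched with stock tools. The file names are hypothetical, and recent GnuPG versions may also need --pinentry-mode loopback for the batch passphrase:

    # generate a strong random key once
    openssl rand -base64 48 > backup.key

    # encrypt a backup with the random key (no password involved at this step)
    tar cz /home/me | gpg --batch --symmetric --cipher-algo AES256 \
        --passphrase-file backup.key -o backup.tar.gz.gpg

    # protect the key itself with a password; this small file is what secondary/tertiary storage holds
    gpg --symmetric --cipher-algo AES256 -o backup.key.gpg backup.key

    # the big encrypted backup can go to the cloud; the plaintext key never does
    shred -u backup.key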
--
Christopher Head


2014\04\30@114850 by Bob Blick

One thing that adds to hard drive reliability is a small desk fan
blowing on the case of the computer. Works for me on the computer that
runs 24/7/365.

I think all brands of hard drive last longer when cool, and there is
frequently a lot of stagnant air around computers that benefits from
being stirred up.

Bob

-- http://www.fastmail.fm - IMAP accessible web-mail


2014\04\30@121634 by veegee

On Apr 30, 2014 12:02 PM, "Bob Blick" <spamBeGonebobblickSTOPspamspamEraseMEftml.net> wrote:
>
> One thing that adds to hard drive reliability is a small desk fan
> blowing on the case of the computer. Works for me on the computer that
> runs 24/7/365.
>
> I think all brands of hard drive last longer when cool, and there is
> frequently a lot of stagnant air around computers that benefits from
> being stirred up.

If I remember correctly, Google recently did a study that showed that hard
drives which ran warmer lasted longer than those which were fan cooled.

2014\04\30@123628 by Carlos Marcano


I'd really want to read that study; it defies every logical and empirical expectation.

Regards,

Carlos.

El abr 30, 2014 11:46 AM, veegee <KILLspamveegeespamBeGonespamveegee.org> escribió:
{Quote hidden}


2014\04\30@151307 by veegee

On Wed, Apr 30, 2014 at 12:35 PM, Carlos Marcano <@spam@c.marcano@spam@spamspam_OUTgmail.com>wrote:

> I'd really want to read that study, defies every logical and empirical
> matter.
>

http://static.googleusercontent.com/media/research.google.com/en//archive/disk_failures.pdf

Article from 2007.

2014\04\30@173905 by Carlos Marcano

Thanks. Interesting piece. I find some trouble with two things:

* Methodology with control groups - not very clear in my opinion.
* Conclusion is a bit vague - acknowledges surprise at the result regarding
temperature as a failure driver but fails to recognize that it is still an
important factor.

Anyway, fairly surprising still.

Regards,

Carlos.



2014-04-30 14:42 GMT-04:30 veegee <spamBeGoneveegeespamKILLspamveegee.org>:

{Quote hidden}


2014\04\30@175738 by peter green

Carlos Marcano wrote:
> Thanks. Interesting piece. I find some trouble on two things:
>
> * Methodology with control groups - Not very clear in my opinion.
> *Conclusion is a bit vague - Acknowledges surprise on the result regarding
> temperature as a failure motivator but fails to recognize it's still
> important incidence.
I wouldn't read too much into Google's temperature results, for a few reasons.

1: The drives were in servers in a datacenter; this means they have very little data at the upper end of the temperature range (look at how their error bars widen at that point).
2: They were trusting the temperature sensors in the drives themselves. This means that their temperature results would be skewed if a particular model of drive had a systematic error in its temperature sensor system (particularly if it over-reported, bringing it into the range where they have very little data).
3: Similar concerns also apply if a particular model of drive really does run especially hot under the same conditions as other drives.

2014\04\30@184404 by Bob Axtell

I am astonished, actually. It flies in the face of MY experience, big-time.
Heat KILLS electronics. Period.

--Bob A




On 4/30/2014 2:57 PM, peter green wrote:
{Quote hidden}


2014\04\30@194203 by Justin Richards

>
> I think all brands of hard drive last longer when cool, and there is
> frequently a lot of stagnant air around computers that benefits from
> being stirred up.
>
I was using caddies (external portable enclosures) for 24/7 data and
experienced consistently very early failures, at 6-12 months.  I felt
inside one after it had been running awhile and discovered it was very hot
to touch, too hot to hold. Whereas drives mounted in a PC felt warm. Even
those that were cramped in an SFF case or running without a case only felt
warm.

No more caddies for long-term use for me.  They simply can't breathe.

Justin

2014\04\30@195654 by veegee

On Wed, Apr 30, 2014 at 6:43 PM, Bob Axtell <TakeThisOuTengineer.....spamTakeThisOuTcotse.net> wrote:

> I am astonished, actually. It flies in the face of MY experience, big-time.
> Heat KILLS electronics. Period.
>

But is it the electronics that fail, or some mechanical part of the drive?

2014\04\30@201645 by Marcel Duchamp

What seems to be missing from the discussion is that there will be a bell curve - extremely low temps will kill drives and extremely high temps will kill drives.

Thus, there is some temp between those two at which a particular brand and model of drive will last the longest.  No one has indicated what temp that is for which brands and models.  It may well be above room temp for some, but others will prefer lower temps.

Without this information, I will go with lower temps until advised otherwise...

2014\04\30@203733 by louijp

The reliability of an HD's electronic part is at least one order of magnitude better than that of the mechanical part. So a little degradation of the life expectancy of the electronics is still way ahead of the life expectancy of the mechanics.

Just my $0.02
Jean-Paul
AC9GH



Sent from my Verizon Wireless 4G LTE smartphone

-------- Original message --------
From: veegee <TakeThisOuTveegeeKILLspamspamspamveegee.org> Date:2014/04/30  7:56 PM  (GMT-05:00) To: "Microcontroller discussion list - Public." <.....piclistspamRemoveMEmit.edu> Subject: Re: [EE]:: Hard drive reliability
On Wed, Apr 30, 2014 at 6:43 PM, Bob Axtell <RemoveMEengineerspamspamBeGonecotse.net> wrote:

> I am astonished, actually. It flies in the face of MY experience, big-time.
> Heat KILLS electronics. Period.
>

But is it the electronics that fail, or some mechanical part of the drive?

2014\04\30@203846 by James Cameron

When I worked with drive specifications some years ago, there were also
maximum and minimum rates of temperature change, and maximum
thermal cycles.

In one server room, the cooling system was so powerful that the
maximum rate was being exceeded (after an outage), and we had to tweak
the cooling system to slow it down.

-- James Cameron
http://quozl.linux.org.au/

2014\04\30@225350 by Peter

I agree with Bob's comment.

The air around the computer needs to be cool and fresh; this not only helps the hard drive but also the computer's motherboard.
Dust is the other killer.

Peter

On 01/05/2014 1:48 AM, Bob Blick wrote:
{Quote hidden}


2014\04\30@230224 by Peter


On 01/05/2014 9:42 AM, Justin Richards wrote:
>> I think all brands of hard drive last longer when cool, and there is
>> frequently a lot of stagnant air around computers that benefits from
>> being stirred up.
>>
>> I was using caddy's (external portable enclosure) for 24/7 data and
> experienced consistent very early failure rates 6month - 12month.  I felt
> inside one after it had been running awhile and discovered it was very hot
> to touch, too hot to hold. Whereas drives mounted in PC felt warm. Even
> those that were cramped in a SFF case  or running without a case only felt
> warm.
>
> No more caddys for long term use for me.  They simply cant breath.
>
> Justin
I agree with Justin.  Caddies seem to fail long term - hard drives just don't like high temperature long term - the cooler the better.
The same goes for some USB hard drive designs - heat will shorten their life as well!
Temperature cycling can also shorten a computer's / hard drive's working life.

Peter

2014\04\30@230853 by RussellMc

On 1 May 2014 09:39, Carlos Marcano <spamBeGonec.marcano@spam@spamspam_OUTgmail.com> wrote:

.... Acknowledges surprise on the result regarding

> temperature as a failure motivator but fails to recognize it's still
> important incidence.
>

>
http://static.googleusercontent.com/media/research.google.com/en//archive/disk_failures.pdf

The important thing to note is that, AS PRESENTED, temperature is roughly
inversely correlated with failure rate up to two years of age; at 3 years anything up
to 40C is OK, with 40-45C being bad and over 45C very bad, and
15-30C best; then for 4-year-old drives (that have survived) it swaps to
roughly linear with temperature, except that 30-35C is slightly better than
15-30C.

Pretty clearly [tm] the data needs more beating than these graphs allow BUT
it seems that running drives in the 35-40 C range overall produces best
results.

Figure 4 confirms this - but what it means needs to be understood. The
columns are proportion of drives at observed temperatures and the line
graph with range markers show failure rates at these various temperatures.

They do not say how or where the drive temperatures are measured.
Presumably this is reported by the drive itself - and that still does not
say exactly what that means - but that information will be available.
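
If the figure is the drive's own SMART-reported temperature, as presumed above, the same number can be read directly with smartmontools; the device name is an example and the attribute name differs a little between vendors:

    # dump SMART attributes and pick out the temperature line(s)
    smartctl -A /dev/sda | grep -i temp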

It does SEEM that running drives "not too cold" may be in order.

Of course, taking too much notice of specific impressions formed from such
an article is dangerous, as not only is the manner in which it was acquired
and processed not fully known, but the usage patterns of Google may be
different enough from 'your' usage to make a difference.



             Russell

2014\04\30@235513 by Peter


On 01/05/2014 1:02 PM, Peter wrote:
>
> ... - the cooler the better.
I should clarify what I meant by " the cooler the better", there is an optimum working temperature for electronics and there is such a thing as "too cold" and "too hot" for electronics.

Hope that makes my previous post a bit clearer.

Peter


'[EE]:: Hard drive reliability'
2014\05\01@001147 by John Gardner
....the usage patterns of Google may be
different enough from 'your' usage to make a difference...

Seems likely. As well, the boxcar loads of drives headed for
the smartest guys in the room may receive a bit more scrutiny
before leaving the Skunk Works than my Best Buy prize...

--

Eppur, si muove...

2014\05\01@050508 by alan.b.pearce

> I am astonished, actually. It flies in the face of MY experience, big-time.
> Heat KILLS electronics. Period.
>
> --Bob A

I'm not totally surprised by the Google results, these things are always on and spinning, so some of the reasons for failure go away (see below).

As to your comment on heat killing electronics - yes it does, but things will often work at high temperatures quite happily - if they are not being temperature cycled. Once up to temperature and running it will often keep running.

So do some of the modes of drive failure. A system I used to service used Micropolis 45MB and 75MB drives (one of them was model number 1325 IIRC; can't remember the other model number). We had a significant number of each of these in systems which were powered on 24/7, and they would operate satisfactorily almost forever.
However if the system got turned off for some reason the drive would attempt to spin up, and then spin down with a failure. It was often possible to get the drive to work by giving it a rotational shake on the spindle axis, and then once the drive was operational it would stay operational. We learned that the trick was to do a full backup real quick and then replace the drive.

Analysis of failed drives showed that there was a flexible PCB that had the head connections. This went around a plastic block that secured it at the body end, but the shape of the block was such that with the heads in the landing zone the flexible PCB was stretched around a sharpish edge. After years of operation the PCB would get cracks in the tracks, and the stretch around the mounting block when in the landing zone would pull the tracks apart to a point where when attempting to power up again the tracks to the servo head would be open circuit and the drive would power down with a 'failure to find servo track' error.

This was about the only failure mode we saw with these drives. I suspect that many drives have a similar problem as the major failure mode, so if drives are kept in operation 24/7 the failure rate can be pretty low - until you power down.



2014\05\01@053323 by RussellMc

On 1 May 2014 21:03, <TakeThisOuTalan.b.pearcespamspamstfc.ac.uk> wrote:

> > I am astonished, actually. Flies into the face of MY experience,
> big-time.
> > Heat KILLS electronics. Period.
> >
> > --Bob A
>
> I'm not totally surprised by the Google results, these things are always
> on and spinning, so some of the reasons for failure go away (see below).
>
> As to your comment on heat killing electronics - yes it does, but things
> will often work at high temperatures quite happily - if they are not being
> temperature cycled. Once up to temperature and running it will often keep
> running.
>

It will depend on the mix of failure modes and how they are temperature
affected.
Flying heads just MAY fly better at lower air densities. Not much
difference between say 25 + 45 C relative to absolute zero (about 300K and
320K =~ 1:1.07) . Enough difference? Don't know.
Lubrication does not slosh around the motor but it is present in the
bearings. Very much depending on the lubricant characteristics, 25 to 45 C
could make a substantial difference, for better or for worse, in two
lubricants. Metal parts expand, lead screws or stepper motors or ... work
oh so slightly differently.

As do some of the methods of drive failure,


Long long long ago in the bad bad bad old days I had a hard drive that
would not start on colder mornings but which ran OK once started. It was
found [tm] that a drive motor disk was accessible from outside the drive.
Flicking this with a finger or even a few good pokes with a pencil were
usually enough to initiate action. Yes, that's a true story :-). (Probably
an ~= 20 MB drive).


Russell

2014\05\01@092253 by Isaac Marino Bavaresco

On 01/05/2014 06:03, alan.b.pearceEraseMEspamstfc.ac.uk wrote:
>> I am astonished, actually. Flies into the face of MY experience, big-time.
>> Heat KILLS electronics. Period.
>>
>> --Bob A
> I'm not totally surprised by the Google results, these things are always on and spinning, so some of the reasons for failure go away (see below).
>
> As to your comment on heat killing electronics - yes it does, but things will often work at high temperatures quite happily - if they are not being temperature cycled. Once up to temperature and running it will often keep running.
>
> As do some of the methods of drive failure, a system I used to service used Micropolis 45MB and 75MB drives (one of them was model number 1325 IIRC, can't remember the other model number). We had a significant number of each of these in systems which were powered on 24/7, and operate satisfactorily almost forever.
>
> However if the system got turned off for some reason the drive would attempt to spin up, and then spin down with a failure. It was often possible to get the drive to work by giving it a rotational shake on the spindle axis, and then once the drive was operational it would stay operational. We learned that the trick was to do a full backup real quick and then replace the drive.
>
> Analysis of failed drives showed that there was a flexible PCB that had the head connections. This went around a plastic block that secured it at the body end, but the shape of the block was such that with the heads in the landing zone the flexible PCB was stretched around a sharpish edge. After years of operation the PCB would get cracks in the tracks, and the stretch around the mounting block when in the landing zone would pull the tracks apart to a point where when attempting to power up again the tracks to the servo head would be open circuit and the drive would power down with a 'failure to find servo track' error.
>
> This was about the only failure mode we saw with these drives. I suspect that many drives have a similar problem as the major failure mode, so if drives are kept in operation 24/7 the failure rate can be pretty low - until you power down.


One failure mode I witnessed at least twice happens when a hard drive that
has been spinning uninterrupted for years is turned off: it cannot start
again.

I suspect it is due to wear in the bearings: while the bearings keep
spinning, the balls stay aligned on the races by centrifugal force, but when
the drive stops the balls move slightly off their clean path and get jammed
against metal burrs.

It happened once with a client's drive that contained their entire
biometrics database (fingerprints) for over 100,000 people. As it turned
out, they had no backup policy, and when we arrived for the yearly
maintenance the catastrophe struck.

It happened again at my wife's previous workplace: they had a server that
ran uninterrupted for a year, and after a scheduled shutdown one of the HDs
refused to spin. Fortunately they were a government agency with a proper
backup policy.


Isaac

-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\05\31@123605 by Luis Moreira

flavicon
face

Hi Guys,
I would like to revisit this thread as I need to get a new hard drive for my
desktop.
At the moment I need a 1 TB to 2 TB internal SATA hard drive that will not
be terribly expensive; what make/model would you guys recommend?
I can get a couple from Amazon, like the Seagate Barracuda 2TB 7200 rpm for
£53.60, which is very good value - is it any good? There is another one
from WD for a similar price...
I know this is terribly subjective, and please don't start a massive war
of words over it - I know we all have different opinions and experiences
with different manufacturers - but I would like to hear your take on these
drives. As some of you have said before, the 5 year warranty seems to have
disappeared; 3 years is probably the maximum you can get.

Best Regards
       Luis
On 1 May 2014 14:30, "Isaac Marino Bavaresco" <RemoveMEisaacbavarescoEraseMEspamspam_OUTyahoo.com.br>
wrote:

{Quote hidden}

--
http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist

2014\05\31@174140 by RussellMc

face picon face
On 1 June 2014 04:36, Luis Moreira <EraseMEluis.moreira1575spam@spam@googlemail.com> wrote:

> At the moment if I need a 1TB to 2 TB
> Internal sata hard drive that  will not be terribly expensive what
> make/model would you guys recommend?



If you read the links I posted on this thread some weeks ago (and add the
discussion that followed for overkill) you will get about as much
information as is likely to be useful.

Based both on what I have observed and on what those links said, I'd buy a
WD drive.
That said, I have about 13 external USB-connected drives operating; of
these, 2 x Seagate 1 TB drives have given 4y 4m and 4y 11m of service.
These are effectively the same as internal SATA drives, but in their own
enclosures.

            Russell
-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\05\31@204252 by veegee

flavicon
face

On Sat, May 31, 2014 at 12:36 PM, Luis Moreira <
@spam@luis.moreira1575spam_OUTspam.....googlemail.com> wrote:

{Quote hidden}

You can't go wrong with WD Black.

I haven't had any issues at all with the latest Seagate Barracuda 1TB and
2TB drives. They're much cheaper and a bit faster, last I tested. Been using
them in my always-on server for 6 months now (three of them in RAID5) and
no issues at all. Very happy with these drives.
--
http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist


'[EE]:: Hard drive reliability'
2014\06\01@085810 by John J. McDonough
flavicon
face
On Sat, 2014-05-31 at 17:36 +0100, Luis Moreira wrote:
> Hi Guys,
> Would like to revisit this thread as I need to get a new hard drive for my
> desktop.

I take an entirely different read on this thread than others.  It looks
as if one manufacturer or another cycles to the top in reliability over
time.  The problem is that it takes years to find out who is on top, and
by then, someone else has the crown.

The only guidance I can get from this thread is that I should decide
whether a longer warranty is worth the money, and base my decision on
that.  So, either 1) pay the long dollar for a warranty which will only
cover the physical drive, a tiny part of the actual cost, or 2) make
everything a RAID set and get the cheapest drives possible.  At least
then a failed drive only costs the price of the drive, and not many,
many lost hours.

--McD


-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\01@103958 by David C Brown

picon face
Those alternatives are not the only ones.   Using a good back-up strategy
is another.  And replacing a failed RAID drive and rebuilding the array is
not a zero-time option.


On 1 June 2014 13:58, John J. McDonough <spamBeGonemcdEraseMEspamis-sixsigma.com> wrote:

{Quote hidden}

-- __________________________________________
David C Brown
43 Bings Road
Whaley Bridge
High Peak                           Phone: 01663 733236
Derbyshire                eMail: dcb.homespamBeGonespamgmail.com
SK23 7ND          web: http://www.bings-knowle.co.uk/dcb
<http://www.jb.man.ac.uk/~dcb>
-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\01@190952 by RussellMc

face picon face
If you read the links I posted on this thread some weeks ago (and add the
discussion that followed for overkill) you will get about as much
information as is likely to be useful.

Based both on what I have observed and on what those links said, I'd buy a
WD drive. That said, I have 2 x Seagate 1 TB drives that have given 4y 4m
and 4y 11m of service.


            Russell


On 1 June 2014 04:36, Luis Moreira <RemoveMEluis.moreira1575@spam@spamspamBeGonegooglemail.com> wrote:

{Quote hidden}

-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\01@192004 by RussellMc

face picon face
On 2 June 2014 00:58, John J. McDonough <.....mcdSTOPspamspam@spam@is-sixsigma.com> wrote:

> On Sat, 2014-05-31 at 17:36 +0100, Luis Moreira wrote:
> > Hi Guys,
> > Would like to revisit this thread as I need to get a new hard drive for
> my
> > desktop.
>
> I take an entirely different read on this thread than others.  It looks
> as if one manufacturer or another cycles to the top in reliability over
> time.  The problem is that it takes years to find out who is on top, and
> by then, someone else has the crown.
>

I had not drawn the conclusion that the order of reliability cycled.
If there was any indication that this was so in the material seen so far
I've overlooked it and would be pleased to have it pointed out.

The impression that I formed was:

Hitachi drives are substantially more reliable than WD or SG.
They are not easily available in NZ and cost about twice as much per GB.

High-spec Seagate drives are relatively reliable, but cost substantially
more than their mass-market offerings.

Randomly chosen from the mass-market offerings, WD drives will be
significantly more reliable than SG drives and cost essentially the same in
NZ.

Certain SG and WD drives are less reliable in very high data rate
environments due to their aggressive spin-down-after-use power management,
which means they frequently stop/start cycle in heavy use. The reliability
of these drives in more typical environments was not specifically
investigated in the survey, but can be expected to be better than in very
high data rate environments.

I had personally concluded some years ago that for the mass-market drives
that I buy, WD had fewer failures than Seagate. As ever, YMMV.




                Russell.
-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\01@195252 by Richard R. Pope

picon face
Russell,
    This is true. Again, you get what you pay for. So you have to ask yourself some questions: Do you want cheap? How high a quality? What about value? You can't have all three; pick any two. If you pick cheap, the drive will probably not last very long. If you pick very expensive, the chances are very good that the drive will last a long time. But this isn't absolute, for random events can cause a drive to fail prematurely: it might be dropped, hit by a power surge, a lightning strike or a short-term power failure; your case might lack enough cooling, causing the drive to run hot; the drive might have been stored in the wrong environment, or spiked with static during installation. Value is a combination of quality and price, and that is how I determine what I am going to pay for something. I know that, all things being equal, higher price equates to higher quality. If uptime is critical you can buy aerospace- or medical-rated drives, but you will really pay for it, and it is almost certain that such a drive will last a very long time. As always, your mileage may vary.
    As for whether you should buy Hitachi, WD, or SG, I can't say. It depends. It depends on what you want.
Thanks,
rich!
P.S. Backup, backup, backup. If you are uncertain backup some more.
rich!

On 6/1/2014 6:19 PM, RussellMc wrote:
{Quote hidden}

-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\02@105530 by Ray Richter

flavicon
face
Well, here is my 2 cents. I own a small computer business and I used to sell
everything that most computer stores would sell. Several years ago we
decided to focus on SMBs (Small & Medium Businesses). Why? Because SMBs
aren't as cheap as regular home consumers. SMBs realize that the data on
their computers is worth a lot more than the hardware. Places like
accounting firms know that the data is their business: lose the data, close
the business. I used to sell Fujitsu, Seagate, WD and Maxtor, plus Hitachi
and Samsung sometimes as specials. What I learned was this: all
manufacturers make good ones and bad ones, except that I found Hitachi and
Samsung to be consistently on the bad side. Fujitsu had the best service for
warranty items. Warranty service for Hitachi was good, but only when the
drives were used by IBM and IBM handled the warranty. So now we're down to
basically Seagate and WD.

To keep drives alive, the best thing one can do is keep them cool. Period. I
have WD drives that have been in service for 16 years. Use RAID 1 (mirror)
whenever possible; if you need more speed, use faster drives or a faster
RAID level (RAID 5), but not RAID 0. Here is an example of how reliability
can be strange: I can't remember the model number, but I remember it was a
WD series. The 2-platter drive was excellent, but the 3-platter drive had
lots of problems, and like any problem product I stopped selling it. So
which are the best overall, and not just by lot number? At the top of the
list are the WD VelociRaptor drives; I have only had one fail, and they are
fast. Next would be the WD enterprise drives (WD Re), then the Seagate
enterprise drives; enterprise drives are built for 24/7/365. After this come
the desktop drives, and once again it would be WD (Black), then Seagate. As
for warranty, WD has better service; who has to send the drive in depends on
the dealer. I will handle the return if it is in the first year for home
computers under my warranty, and for the manufacturer's warranty period for
SMBs.

As for SSD drives, I haven't been selling them long enough or in enough
volume to have a really good opinion yet. SMBs are still shy about them, and
some gamers have found that some of the drives slow down a bit with age.
Also, some older computers will not boot from an SSD; it depends on the
BIOS. My best advice: if you find a drive you like and think it will work,
research its MTBF (mean time between failures), and research the model, not
the manufacturer.

Ray


> {Original Message removed}

2014\06\04@124753 by alan.b.pearce

face picon face
> As for SSD drives, I haven't been selling them long enough or in enough
> volume to have a really good opinion yet. SMBs are still shy about them, and
> some gamers have found that some of the drives slow down a bit with age.
> Also, some older computers will not boot from an SSD; it depends on the
> BIOS. My best advice: if you find a drive you like and think it will work,
> research its MTBF (mean time between failures), and research the model, not
> the manufacturer.

Maybe this is one to look at ...
http://www.theregister.co.uk/2014/06/04/soupedup_sandisk_ssd_sashays_onto_stage/



-- Scanned by iCritical.

-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\04@155829 by John Ferrell

face
flavicon
face
I ordered two Seagate 1 TB hybrid drives by mistake. They were intended to be backup drives.
I was shocked to find they were laptop-sized drives. A little online searching revealed they are commonly used in all-in-one machines.
I cloned my C: drive and installed one. At first I could see little if any difference; after a day or two the machine seemed to be a lot faster.
There was no special setup on my part.
I don't know, but I speculate:
If the drive is not spinning (in sleep mode), there is no spin-up wait for common activity served from the solid-state area.
I suspect the firmware uses the algorithms developed for virtual storage, such as paging out the Least Recently Used blocks, to decide what lives in the solid-state store.
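
If it is anything like the textbook LRU idea, the gist is something like this
toy Python sketch (purely illustrative; I have no idea what the drive
firmware actually does):

    from collections import OrderedDict

    class TinyLRUCache:
        """Keep recently used items; evict the least recently used one."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = OrderedDict()

        def access(self, key, value):
            if key in self.items:
                self.items.move_to_end(key)      # mark as most recently used
            self.items[key] = value
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)   # evict the least recently used

    cache = TinyLRUCache(capacity=3)
    for block in ["A", "B", "C", "A", "D"]:      # "B" gets evicted, not "A"
        cache.access(block, "sector data")
    print(list(cache.items))                     # ['C', 'A', 'D']
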
I think they were a good deal at about $100 each. I intend to put the other one in my cheap HP laptop. Those laptops seem to be good for at least three years for me.
The problem with basing decisions on history is that those machines are not what is available in today's market!
There was a time when all of the failures were WDs.
If you use disk drives, I think it wise to be paranoid...

On 6/4/2014 12:47 PM, alan.b.pearceEraseMEspam@spam@stfc.ac.uk wrote:
{Quote hidden}

-- John Ferrell W8CCW

"Travel is fatal to prejudice, bigotry and narrow-mindedness”
 Mark Twain


-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\05@072056 by John J. McDonough

flavicon
face
On Wed, 2014-06-04 at 16:47 +0000, RemoveMEalan.b.pearcespamspamBeGonestfc.ac.uk wrote:
> > As for SSD drives I haven't been selling them enough or long enough for a
> > real good opinion yet.

SSD drives have a limited number of writes, although they can be quite
fast.  The Fedora ARM project at Seneca has a large number of SSD drives
in their compile farm.  They are replaced every six months because they
know they will fail soon after.  Even so, the performance improvement is
worth it to them.

This is obviously a very demanding application; essentially they are
writing 24x7.  But it does point out the somewhat different place solid
state storage holds next to rotating storage.

It also makes me wonder a bit about hybrid drives. They help with the
performance/price tradeoff, but the solid-state part is going to get
written a lot, since virtually all writes to the magnetic media are going
to be written to flash first.

I suspect for most home use the flash life is going to be essentially
infinite.  But for more demanding applications, remember flash can stand
a limited number of writes.

--McD


-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\05@073939 by alan.b.pearce

face picon face
> On Wed, 2014-06-04 at 16:47 +0000, spamBeGonealan.b.pearceKILLspamspam@spam@stfc.ac.uk wrote:
> > > As for SSD drives I haven't been selling them enough or long enough
> > > for a real good opinion yet.
>
> SSD drives have a limited number of writes, although they can be quite fast.

The general 'figure of merit' used seems to be 'full drive writes/day' or /week or /month depending on the manufacturer and the drive size/speed.
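
As a rough illustration of how an endurance rating like that translates into
a lifetime (all the numbers below are made-up examples, not the spec of any
particular drive), in Python:

    capacity_gb        = 256     # assumed drive size
    rated_full_writes  = 3000    # assumed full-drive writes the flash can take
    host_writes_gb_day = 50      # assumed host writes per day

    endurance_gb  = capacity_gb * rated_full_writes
    lifetime_days = endurance_gb / host_writes_gb_day
    print(lifetime_days / 365)   # ~42 years at this fairly light write rate
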
-- Scanned by iCritical.

-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\05@084143 by Richard R. Pope

picon face
John,
    I don't know about the life of an SSD or hybrid drive that is being used in a consumer setting by a gamer, or by someone like myself who leaves their computers on for extended periods of time. A lot of gamers will run their computers for 18 hours or longer at a time. I leave my computer running pretty much 24/7 even though the actual usage might be only eight hours or so. Remember, Windows writes to the swap partition whether or not the system is being used. If the system is left on for most of the time and is not set up to go into low-power mode, the hybrid drive will be continuously written to by Windows. I would suspect that under these conditions a hybrid drive will probably fail in that six-month time frame.
    So I would recommend a few choices. Only turn the system on when you are going to use it; of course this shortens the life of the whole system due to the inrush current at start-up, and you have to wait for the system to boot. Or set the system up to drop into low-power mode after it hasn't been used for a while; you will have to put up with the couple of minutes or so while the system comes back to life, and you still have some high inrush currents as the hard drives spin back up, but the rest of the system doesn't suffer from these surges. And set up the swap file on a separate drive that is not a hybrid, and don't have any swap partitions on any hybrid drives.
    I don't know enough about Linux to address these points. Does Linux continue to write to the swap drive even though the computer is not being used? I would like to know.
Thanks,
rich!

On 6/5/2014 6:20 AM, John J. McDonough wrote:
{Quote hidden}

-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\05@092549 by David C Brown

picon face
I have been running Win7 off a 64 GB SSD in much the same way that you
describe - on 24/7, used 4-8 hours a day - for just over eighteen months
without problems.

I let Windows manage the pagefile on the SSD, ensured that TRIM was enabled
and, of course, disabled defrag on the SSD.


On 5 June 2014 13:41, Richard R. Pope <mechanic_2spam_OUTspam@spam@charter.net> wrote:

{Quote hidden}

-- __________________________________________
David C Brown
43 Bings Road
Whaley Bridge
High Peak                           Phone: 01663 733236
Derbyshire                eMail: spamBeGonedcb.home@spam@spamgmail.com
SK23 7ND          web: http://www.bings-knowle.co.uk/dcb
<http://www.jb.man.ac.uk/~dcb>
-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\05@094426 by Richard R. Pope

picon face
David,
    That makes me wonder if that company has another underlying problem with their system. If their drives fail after six months or so, maybe there is a power supply or cooling problem. Hmm, just a thought.
Thanks,
rich!
P.S. Some of my drives are almost 6 years old. They have a lot of miles on them. The only important stuff is on my records and picture drives. That data is copied across several drives just in case. I used to use tape but with my drives averaging 300GB in size this became impractical.
Thanks,
rich!

On 6/5/2014 8:25 AM, David C Brown wrote:
> I have been running win7 off a 64G SSD in much the same way that you
> describe - on 24/7, used 4-8 hours a day - for just over eighteen months
> without problems.
>
> I let windows manage the pagefile on the SSD and ensured that TRIM was set
> and, o course, disable defrag on the SSD
>
-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\05@120819 by Peter Johansson

picon face
On Thu, Jun 5, 2014 at 8:41 AM, Richard R. Pope <RemoveMEmechanic_2EraseMEspamKILLspamcharter.net> wrote:

>      I don't know about the life of a SSD or Hybrid drive that is being
> used in a consumer setting with a gamer or someone like myself who
> leaves their computers on for extended periods of time. A lot of gamers
> will run their computers for 18 hours or longer at a time. I leave my
> computer running pretty much 24/7 even though the actual usage might be
> only eight hours or so. Remember windows writes to the swap partition
> whether or not the system is being used. If the system is being left on
> for most of the time and it is not setup to go into low power mode the
> hybrid drive will be continuously written to by windows. I would suspect
> that under these conditions a hybrid drive will probably fail in that
> six month time frame.

You would do well to actually profile your disk activity (under both
demand and idle states) before deciding how to utilize your SSD space.
In most cases writes to swap are relatively low and most users
benefit greatly from having swap on SSD.  (If you are doing a lot of
writing to swap, you should consider a RAM upgrade regardless.)

The biggest thing you need to worry about when using SSDs is write
amplification.  If you are not familiar with the term, you would do
well to read up on it.
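
On Linux, one quick-and-dirty way to get a feel for how much is actually
being written is to diff /proc/diskstats over an interval.  A sketch only;
"sda" is just an example device name:

    import time

    def sectors_written(device="sda"):
        # /proc/diskstats: field 3 is the device name, field 10 is sectors written
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == device:
                    return int(fields[9])
        raise ValueError("device not found: " + device)

    before = sectors_written()
    time.sleep(60)
    after = sectors_written()
    print("MB written in the last minute:", (after - before) * 512 / 1e6)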

-p.
-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\07@192416 by John J. McDonough

flavicon
face
On Thu, 2014-06-05 at 07:41 -0500, Richard R. Pope wrote:
> John,
>      I don't know about the life of a SSD or Hybrid drive that is being
> used in a consumer setting with a gamer or someone like myself who
> leaves their computers on for extended periods of time. A lot of gamers
> will run their computers for 18 hours or longer at a time. I leave my
> computer running pretty much 24/7 even though the actual usage might be
> only eight hours or so. Remember windows writes to the swap partition
> whether or not the system is being used. If the system is being left on
> for most of the time and it is not setup to go into low power mode the
> hybrid drive will be continuously written to by windows. I would suspect
> that under these conditions a hybrid drive will probably fail in that
> six month time frame.

Windows USED to write several other files regularly as well. But I assume
that newer versions of Windows either don't do this, or use a filesystem
that doesn't hammer the same physical sectors.


>      I don't know enough about Linux to address these points. Does
Linux
> continue to write to the swap drive even though the computer is not
> being used? I would like to know.

Linux does not.  However, the way people use Linux systems, there could
be more or less continuous writes in some cases, but generally that is
at the user's discretion.  What is going on "under the covers" in Linux
is a lot more transparent than in Windows.

Linux has a range of filesystems available with different
characteristics.  Recent versions include some "flash-friendly"
filesystems that specifically avoid that problem.
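
If you want to see for yourself whether an idle Linux box is actually
swapping anything out, the kernel keeps a cumulative counter of pages
swapped out (pswpout) in /proc/vmstat.  A minimal sketch:

    import time

    def pages_swapped_out():
        with open("/proc/vmstat") as f:
            for line in f:
                name, value = line.split()
                if name == "pswpout":
                    return int(value)
        return 0

    before = pages_swapped_out()
    time.sleep(300)              # leave the machine idle for five minutes
    after = pages_swapped_out()
    print("pages swapped out while idle:", after - before)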


--McD


-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\07@194352 by John J. McDonough

flavicon
face
On Thu, 2014-06-05 at 08:44 -0500, Richard R. Pope wrote:
> David,
>      That makes me wonder if that company has another under lying
> problem with their system. If their drives fail after six  months or so
> maybe there is a power supply or cooling problem. Hum, just a thought.

The problem is their application writes 24/7.  The SSDs are part of a
storage array that services dozens of small ARM systems.  Seneca hosts
the build system for the Fedora distribution for ARM.  Fedora has
somewhere north of 15,000 packages, releases every six months, and it
takes lots of builds for a release.  Fedora rules require that the
packages be compiled on the target system, hence no cross-compiling
which would be a lot faster.  Compiling 15,000 packages on a Beagle
Board takes some time, hence the large number of systems.

They don't see reliability issues with magnetic drives, but in their
application, the performance improvement is worth the cost.  Obviously,
being a university they ran the tests, did the math, and the SSDs won,
even though they have a limited life.  And they don't wait for failures.
They remove them from service before they are expected to fail because
they don't want the downtime.  They also tested a number of
manufacturers a couple years ago, I don't know whether they have gone
back and re-run the tests with more recent drives.  They did a
presentation at FUDcon Blacksburg, I suspect a video might still be
online.

If you recall, back in the PIC16F84/84A, 628/628A days there were some
pretty dramatic differences in flash life between different PICs.  Seems
like there is a tradeoff between life, speed and manufacturing cost,
although I'm not familiar with the manufacturing process in any level of
detail.  But some PICs had 100 or 1000 times the flash life of others,
and it wasn't always the newer PICs with the longer life.  But then,
number of flash writes isn't that big a deal with PICs - even a hobbyist
is unlikely to program a particular PIC 1000 times, and I seem to recall
most were in the 10,000 to 100,000 write range.

--McD


-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\08@212342 by Richard R. Pope

picon face
John,
    Thanks for answering my question about whether Linux writes to swap even when the system is not being used. I suspected as much, but I didn't know. I would switch to Linux or Amiga, but they don't support certain games that I like to play. Again, thanks for the answer.
rich!
And for those who didn't read the earlier post: unlike Windows, Linux only writes to swap when the system is being used.
rich!
On 6/7/2014 6:24 PM, John J. McDonough wrote:


---
This email is free from viruses and malware because avast! Antivirus protection is active.
http://www.avast.com

-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\09@170846 by John Ferrell

face
flavicon
face
I have never considered this in the past. Now that you bring it up it seems to me that a separate drive for paging would be a good idea. If we must beat a drive to death, I would prefer that it not be the C: drive!

On 6/5/2014 8:41 AM, Richard R. Pope wrote:
>   I leave my
> computer running pretty much 24/7 even though the actual usage might be
> only eight hours or so. Remember windows writes to the swap partition
> whether or not the system is being used. If the system is being left on
> for most of the time and it is not setup to go into low power mode the
> hybrid drive will be continuously written to by windows. I would suspect
> that under these conditions a hybrid drive will probably fail in that
> six month time frame.

-- John Ferrell W8CCW

"Guns are a lot like parachutes:
  If you need one and don't have one,
   you'll probably never need one again."


-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\10@172402 by Carlos Marcano

picon face

I thought I should post this from Backblaze (please forgive me if it has
already been posted):

<http://blog.backblaze.com/2014/05/12/hard-drive-temperature-does-it-matter/>

Regards,

Carlos.



2014-06-09 16:38 GMT-04:30 John Ferrell <spamBeGonejferrell13spam_OUTspamRemoveMEtriad.rr.com>:

{Quote hidden}

--
http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist

2014\06\11@065523 by Justin Richards

face picon face
On 11 June 2014 05:24, Carlos Marcano <.....c.marcanospamRemoveMEgmail.com> wrote:

> I thought I should post this from Backblaze (please forbid if it has
> already been posted):
>
> <
> blog.backblaze.com/2014/05/12/hard-drive-temperature-does-it-matter/
> >
>
Backblaze appear to be doing an excellent job keeping their drives cool.

It would be interesting to see failure rates over a wider range of
temperatures.  That may prove to be an expensive exercise, and I suspect no
one would want to expose a significant number of their drives to higher
temperatures just to make the data valid.

Would be interesting if they were willing to sacrifice some older pods and
expose them to higher temperatures.

I am not sure I agree with "How much does operating temperature affect the
failure rates of disk drives? Not much."

I think the only conclusion is: not much over the temperatures theirs are
run at.

Justin
-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\11@081006 by Justin Richards

face picon face
I have been on the lookout for new drives, and Hitachi looks like the clear
winner from Backblaze.  I checked out what's available.

1 item on eBay:

Brand New Hitachi/HGST Deskstar Internal Hard Drive 4TB/4000GB

HGST (formerly Hitachi Global Storage Technologies)

Wikipedia indicates HGST is now owned by WD.

So can we expect the same Hitachi quality?

Justin
-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\11@102228 by Harrison Cooper

flavicon
face
Interesting data, thanks for sharing.  Some servers do a good job pushing air, and it depends on where the drives are mounted, of course.  Ours usually end up at the back, getting the full brunt of hot air from CPUs and DRAM.  Facebook actually puts ours in front, so we get the full effect of the "cool" air.
{Original Message removed}

2014\06\11@111459 by Peter Johansson

picon face
On Wed, Jun 11, 2014 at 6:55 AM, Justin Richards
<justin.richardsspam@spam@gmail.com> wrote:

> Backblaze appear to be doing an excellent job keeping their drives cool.
>
> I am not sure I agree with "How much does operating temperature affect the
> failure rates of disk drives? Not much."
>
> I think the only conclusion is:- not much over the temperatures theirs are
> run at.

Indeed.  While this conclusion may be appropriate for datacenter
installations with high-volume rackmount case fans and dedicated cooling
(chillers), in no way does it apply to most home set-ups.  Even a single
datacenter case will sound like a jet engine in a quiet home office.

It is also worth noting that datacenter temperatures are on the rise.
I suspect the datacenter under test was uncommonly cool even for the
time of the test, and even more so today.

-p.
-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\12@120637 by Michael Graff

flavicon
face
I seem to remember a study from Google which indicated that drive failures were loosely correlated to temperature and strongly correlated to manufacturing batch.

--Michael



> On Jun 11, 2014, at 9:22 AM, Harrison Cooper <EraseMEHCooperRemoveMEspamSTOPspamfusionio.com> wrote:
>
> Interesting data, thanks for sharing.  Some servers do a good job pushing air, and it depends on where the drives are mounted of course.  Ours usually end up in the back side getting the full brunt of hot air from CPU's and DRAM.  Facebook actually puts ours in front, so we get the full effect of the "cool" air.  
>
> {Original Message removed}

2014\06\12@124339 by Michael Graff

flavicon
face
www.youtube.com/watch?v=tDacjrSCeq4

(1)  Don't yell at hard drive arrays.
(2)  If you have multiple drives, get server grade or NAS grade which can handle the vibrations.

http://static.googleusercontent.com/media/research.google.com/en/us/archive/disk_failures.pdf

(1)  Failure rates are highly correlated with drive models, manufacturers and vintages.
(2)  Drive temperature is loosely correlated to drive failure.
(3)  SMART monitoring is basically useless as a predictor of failure.

--Michael



> On Jun 11, 2014, at 9:22 AM, Harrison Cooper <RemoveMEHCooperKILLspamspamTakeThisOuTfusionio.com> wrote:
>
> Interesting data, thanks for sharing.  Some servers do a good job pushing air, and it depends on where the drives are mounted of course.  Ours usually end up in the back side getting the full brunt of hot air from CPU's and DRAM.  Facebook actually puts ours in front, so we get the full effect of the "cool" air.  
>
> {Original Message removed}

2014\06\16@013159 by Richard R. Pope

picon face
John,
    I have been doing this for years. If the system uses PATA, it is important that the swap drive is connected to a channel other than the one that the system drive is on. A PATA channel can't multitask, so if both drives are on the same channel, one drive must finish what it is doing before the system can access the other drive. SATA and SCSI don't have this problem.
Thanks,
rich!

On 6/9/2014 4:08 PM, John Ferrell wrote:
{Quote hidden}

-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\16@133418 by John Ferrell

face
flavicon
face
I must confess to ignorance on this subject. I will check out "PATA".  I, too, am an old-timer and have never really become a Windows fan.
As a user, all things related to Windows live on the system HDD, which is usually C:.
Whatever paging goes on is Windows' doing, and also on that drive. Be it good or bad, I will find out over time whether my hybrid drive was a good idea.
I really do prefer reliability over speed.
I will return to lurking mode...

On 6/16/2014 1:31 AM, Richard R. Pope wrote:
{Quote hidden}

-- John Ferrell W8CCW
   Julian NC 27283


-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.

2014\06\16@180501 by alan.b.pearce

face picon face
> I must confess to ignorance on this subject. I will check out "PATA".

PATA refers to 'Parallel ATA' drives, i.e. the original IDE drives that use a 40-way ribbon cable for connection, as distinct from SATA ('Serial ATA') drives, which are what is now used and which allow more than two drives to be connected to a controller.
-- Scanned by iCritical.

-- http://www.piclist.com/techref/piclist PIC/SX FAQ & list archive
View/change your membership options at
mailman.mit.edu/mailman/listinfo/piclist
.
