PICList Thread
'[EE]:: SATA transfer rate'
2007\07\10@084022 by Russell McMahon

I'm copying data from an almost full 320 GB Seagate SATA drive to a
320GB WD SATA drive. WD is new. Seagate has been extensively used.
Processor is Pentium D (dual core) 2.8 GHz, 1 GB RAM. Both drives are
NTFS. Transfer is using XXCOPY in a DOS box, which is usually about as
fast as any other means. [[I avoid Windows basic copy as it's not easy
to restart in the middle if it crashes out, and such a large transfer
always crashes out for any number of inadequate reasons. No doubt
there are Windows-level utilities that do this with proper resume.]]
Motherboard has 4 x SATA ports and there are 4 x 320 GB drives on the
system (including the above 2) plus an 80 GB IDE drive. No other user
activity is taking place. PC is LAN connected. Not acting as a server
or being accessed on the LAN.

What sort of data transfer rate would people expect?

I'm getting something around 2.5 to 3 MB/second (won't know for sure
until finished). This seems far too slow. It's been running for about
24 hours now and it is almost complete. Setting task priority high
didn't seem to help. Neither did playing with affinity for other
tasks. I've shut down all tasks that are obviously unnecessary and no
other applications are running.



           Russell


2007\07\10@085026 by Peter Bindels

Hi Russell,

On 10/07/07, Russell McMahon <apptech@paradise.net.nz> wrote:
> I'm copying data from an almost full 320 GB Seagate SATA drive to a
> 320GB WD SATA drive. WD is new. Seagate has been extensively used.
> Processor is Pentium D (dual core) 2.8 GHz, 1 GB RAM. Both drives are
> NTFS. Transfer is using XXCOPY in a DOS box, which is usually about as
> fast as any other means. [[I avoid Windows basic copy as it's not easy
> to restart in the middle if it crashes out, and such a large transfer
> always crashes out for any number of inadequate reasons. No doubt
> there are Windows-level utilities that do this with proper resume.]]
> Motherboard has 4 x SATA ports and there are 4 x 320 GB drives on the
> system (including the above 2) plus an 80 GB IDE drive. No other user
> activity is taking place. PC is LAN connected. Not acting as a server
> or being accessed on the LAN.
>
> What sort of data transfer rate would people expect?

I would expect about 30-40 MB/s, but you're not copying it intelligently.

Do you want to:
- Back up the full 320GB onto an identical filesystem?
- Copy the 320GB to another disk, automatically defragmenting along
the way, but to the same filesystem?
- Copy the 320GB to another disk to another filesystem?

If the second or third, you're doing it the right way. If you want to
do the first (and if I understand this correctly, you do), you don't
want a file-level copy but a disk-level copy. Try dd or something
similar. dd is also available for Windows with raw disk access.
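
For illustration, a raw block-level copy of the kind dd performs comes
down to the sketch below. The device paths are Linux-style examples --
double-check them before running anything like this, since writing to
the wrong disk is unrecoverable.

  # Sketch of a raw disk-level copy, i.e. roughly what dd does.
  # ASSUMPTION: /dev/sda is the source, /dev/sdb the target; needs
  # root, and it overwrites the target disk wholesale.
  BLOCK = 1024 * 1024  # 1 MiB per transfer keeps per-call overhead low

  with open("/dev/sda", "rb") as src, open("/dev/sdb", "wb") as dst:
      while True:
          buf = src.read(BLOCK)
          if not buf:
              break
          dst.write(buf)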

Alternatively, if you do want to do #2 or #3, try rebooting to Linux.
It has a faster driver for most filesystems, although writing to NTFS
is discouraged there (because NTFS is mindbogglingly complex,
undocumented and ever-changing). When I copied my 250GB disk to a pair
of 250GB disks (all SATA), about 200GB of data took 2-4 hours.

2007\07\10@085918 by Dario Greggio

Russell McMahon wrote:

> What sort of data transfer rate would people expect?

I'd say 150MBit/s should be more or less guaranteed with that hardware,
so it would make some 15MByte/sec.
ATA133 worked well enough. Basic SATA is close to that.

Considering FAT overhead, it could slow down a little or a lot,
depending on average file sizes.
Also, the filesystem cache memory should make some difference.

--
Ciao, Dario

2007\07\10@091159 by Gerhard Fiedler

Russell McMahon wrote:

> I'm copying data from an almost full 320 GB Seagate SATA drive to a 320GB
> WD SATA drive. WD is new. Seagate has been extensively used. Processor
> is Pentium D (dual core) 2.8 GHz, 1 GB RAM. Both drives are NTFS.
> Transfer is using XXCOPY in a DOS box, which is usually about as fast as
> any other means. [...] Motherboard has 4 x SATA ports and there are 4 x
> 320 GB drives on the system (including the above 2) plus an 80 GB IDE
> drive. No other user activity is taking place. PC is LAN connected. Not
> acting as a server or being accessed on the LAN.
>
> What sort of data transfer rate would people expect?

With two standard IDE 7200rpm disks with NTFS on a 350MHz PII, running
Win2k without any special accelerating measures, I typically get ~20MB/s.

What you should get depends mostly on the drives (rotation speed, number of
surfaces) and on how fragmented the files are. For such transfers, the
limiting factor is in most cases the data transfer on and off the disks;
the rest of the system should be able to cope with that rate.

See e.g. <http://www.pcguide.com/ref/hdd/perf/perf/spec/transSTR-c.html>.

> I'm getting something around 2.5 to 3 MB/second (won't know for sure
> until finished). This seems far too slow.

I agree.
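
A rough sanity check of the numbers (a back-of-the-envelope sketch; the
300GB figure approximates the "almost full" 320GB drive):

  # Time to move ~300 GB of data at various sustained rates.
  data_mb = 300 * 1024
  for rate in (3, 20, 40):   # MB/s
      print(f"{rate} MB/s -> {data_mb / rate / 3600:.1f} hours")
  # 3 MB/s  -> 28.4 hours (consistent with the ~24h reported so far)
  # 20 MB/s -> 4.3 hours; 40 MB/s -> 2.1 hours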

Gerhard

2007\07\10@091834 by Tony Smith

> I'm copying data from an almost full 320 GB Seagate SATA
> drive to a 320GB WD SATA drive. WD is new. Seagate has been
> extensively used.
> Processor is Pentium D (dual core) 2.8 GHz, 1 GB RAM. Both
> drives are NTFS. Transfer is using XXCOPY in a DOS box, which
>
> What sort of data transfer rate would people expect?
>
> I'm getting something around 2.5 to 3 MB/second (won't know
> for sure until finished). This seems far too slow. It's been
> running for about
> 24 hours now and it is almost complete. Setting task priority
>


I Ghosted a drive that size the other day (both Seagates, I think), and it
only took about 3 hours.  Certainly wasn't all day.  I wandered off to do
something more interesting, but the transfer rate was showing around
2GB/minute, say 33MB/second.

That wasn't SATA, that was IDE with the 80-conductor cable. Slower PC,
1GHz?, but that was real DOS though, not a cmd shell.

I expect SATA to do better, something must be broken...

Tony

2007\07\10@091859 by Lee Jones

>> What sort of data transfer rate would people expect?

> I'd say 150MBit/s should be more or less guaranteed with that
> hardware, so it would make some 15MByte/sec.

SATA I is 1.5 gigabit per second (150 megabyte per second)
from drive to controller.

SATA II is 3.0 gigabit per second (300 megabyte per second).

> ATA133 worked well enough. Basic SATA is close to that.

Basic SATA I should be faster than ATA/100 or ATA/133.

Maybe the motherboard has a bottleneck between the internal SATA
controller and the system and/or memory bus.

Or you're being hosed by the OS.
                                               Lee Jones

2007\07\10@092938 by Dario Greggio

Lee Jones wrote:

> SATA I is 1.5 gigabit per second (150 megabyte per second)
> from drive to controller.
[...]

Yeah, Lee: you're right, this is what I read on SATA disks too.
It's simply that I could not believe they were "this much faster" than
ATA, since their price is comparable...

What if SATA was rated in bits per second, and ATA in bytes per second??
Sometimes these "misunderstandings" happen in electronics... !

--
Ciao, Dario

2007\07\10@093401 by Hector Martin

Dario Greggio wrote:
> Russell McMahon wrote:
>
>> What sort of data transfer rate would people expect?
>
> I'd say 150MBit/s should be more or less guaranteed with that hardware,
> so it would make some 15MByte/sec.
> ATA133 worked well enough. Basic SATA is close to that.

SATA is 1.5Gbit/s on the wire, which works out to around 150MB/s of
data after 8b/10b encoding (twice that for SATA-300, which most newer
drives and motherboards handle). However, that rate is impossible to
achieve in practice (unless you're reading cached data), since the
actual hard drive platters are *much* slower.

On my box (Athlon64 3000+), using Hitachi 80GB drives on a SATA-150
motherboard, I get 30MB/s raw read speed. Factoring in filesystem
overhead, it's probably closer to 25MB/s. If you copy the raw partition
using DD, you'll get the full 30MB/s.

2-3MB/s is much too slow. Check your DMA status - Windows sometimes
randomly decides to disable DMA on a hard drive, and the only way to
reenable it is to screw around with the registry. I'd try a Linux
Live-CD. Use hdparm -t /dev/sda (substitute in whatever your HDD device
name is) to get a benchmark on read speed. If it's low, you probably
have a hardware limitation. If it's high, then Windows is screwing you over.
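
The core of such a read benchmark is tiny; a rough Python equivalent of
hdparm -t is sketched below (unlike hdparm it doesn't flush caches
first, so treat the result as optimistic; the device name is an
assumption, and raw reads need root):

  # Crude sequential-read benchmark, roughly what `hdparm -t` measures.
  import time

  DEV = "/dev/sda"          # ASSUMPTION: adjust to the drive under test
  BLOCK, SECONDS = 1024 * 1024, 3

  total, t0 = 0, time.time()
  with open(DEV, "rb", buffering=0) as f:   # unbuffered raw reads
      while time.time() - t0 < SECONDS:
          total += len(f.read(BLOCK))
  print(f"{total / (time.time() - t0) / 1e6:.1f} MB/s")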

--
Hector Martin (hector@marcansoft.com)
Public Key: http://www.marcansoft.com/marcan.asc

2007\07\10@113801 by Robert Rolf

You said you were using XXCOPY 'in a DOS Box'.
All I/O in a DOS box is virtualized, so the context switching is killing you.

http://www.acronis.com/ has some decent tools for copying and resizing drives,
with free demo downloads good for 30 days.

R

Hector Martin wrote:

{Quote hidden}

2007\07\10@114211 by Hector Martin

Dario Greggio wrote:
> Lee Jones wrote:
>
>> SATA I is 1.5 gigabit per second (150 megabyte per second)
>> from drive to controller.
> [...]
>
> Yeah, Lee: you're right, this is what I read on SATA disks too.
> It's simply that I could not believe they were "this much faster" than
> ATA, since their price is comparable...
>
> What if SATA was rated in bits per second, and ATA in bytes per second??
> Sometimes these "misunderstandings" happen in electronics... !

ATA is 133MB/s. SATA is 150MB/s. SATA-300 is 300MB/s. It's not much
better than ATA, especially for the older SATA-150 variety. The big
benefits are the much more manageable cabling, the dedicated pipe to
each hard drive, better standardization, etc.

These numbers are irrelevant for most situations, since the speed of the
actual platters tops out at around 40-80MB/s for most current 7200RPM
drives. The beginning of the disk is faster than the end, since drives
use a number of density zones and put more data on the outer edge of the
disk, which is the start of the data on hard drives (this works the
exact opposite way with CDs and DVDs). This hasn't changed with SATA.
SATA drives aren't "much faster" than their ATA counterparts. They're
somewhat faster, but not by that much.

SATA is great for RAID and the like though. With PATA, two drives on one
channel share the same cable, and thus split the bandwidth. 66MB/s does
cause a bottleneck with newer drives, and that's assuming ideal
conditions otherwise. With SATA, each hard drive gets its own cable, so
they can both operate at maximum speed.

--
Hector Martin (hector@marcansoft.com)
Public Key: http://www.marcansoft.com/marcan.asc

2007\07\10@115420 by Dario Greggio

Hector Martin wrote:

> SATA is great for RAID and the like though. [...]

Hi Hector, yes, I agree with all of this. It's consistent with all I've
learned in the field so far.


--
Ciao, Dario

2007\07\10@124445 by Hector Martin

Robert Rolf wrote:
> You said you were using XXCOPY 'in a DOS Box'.
> All I/O in a DOS box is virtualized, so the context switching is killing you.

I thought DOS boxes in Windows 2000 and XP were really just system
command lines. I'm sure I/O is virtualized for DOS applications, but
does this also affect native 32-bit apps? If XXCOPY is a proper Windows
32-bit app, it should work just like any GUI app, I'd assume?

--
Hector Martin (hector@marcansoft.com)
Public Key: http://www.marcansoft.com/marcan.asc

2007\07\10@130059 by Dario Greggio

Hector Martin wrote:

> If XXCOPY is a proper Windows
> 32-bit app, it should work just like any GUI app, I'd assume?

Yes, if it's a console Windows application, it should behave like any
other piece of Windows software.

2007\07\10@150739 by Mark Rages

On 7/10/07, Lee Jones <lee@frumble.claremont.edu> wrote:
> >> What sort of data transfer rate would people expect?
>
> > I'd say 150MBit/s should be more or less guaranteed with that
> > hardware, so it would make some 15MByte/sec.
>
> SATA I is 1.5 gigabit per second (150 megabyte per second)
> from drive to controller.
>
> SATA II is 3.0 gigabit per second (300 megabyte per second).
>

In practice, you never get close to these numbers on an extended copy.
The bottleneck is getting the data off the platters.

Drive manufacturers will generally not tell you the sustained data
transfer rate.

Regards,
Mark
--
Mark Rages, Engineer
Midwest Telecine LLC
markrages@midwesttelecine.com

2007\07\10@165114 by Dr Skip

Here's a little trick, even for the first drive (C:). If Windows
disables DMA, it isn't easy to re-enable, because Windows does that
after a certain number of errors over a period of time. It then assumes
the drive can't handle DMA (at whatever level) and shuts it off. In
Control Panel, uninstall the physical drive. It will tell you the
removal will happen at reboot. Reboot. Uninstalling has wiped the
status bits and such, and for a normally removed piece of hardware, it
would be gone on reboot. However, removing the hardware doesn't affect
the boot sector or files on the drive, so when it reboots, the BIOS
boots the drive, Windows starts, 'sees' your 'new' C drive (or other
drive) and reinstalls it. All is reset.

Always works here, but as they say, YMMV. ;-)

-Skip


Hector Martin wrote:
> 2-3MB/s is much too slow. Check your DMA status - Windows sometimes
> randomly decides to disable DMA on a hard drive, and the only way to
> reenable it is to screw around with the registry.
>  

2007\07\10@224055 by Josh Koffman

On 7/10/07, Dario Greggio <adpm.to@inwind.it> wrote:
> Hector Martin wrote:
>
> > If XXCOPY is a proper Windows
> > 32-bit app, it should work just like any GUI app, I'd assume?
>
> Yes, if it's a console Windows application, it should behave like any
> other piece of Windows software.

Could it be you're running the wrong console? I seem to recall that
there's a difference between doing start/run/command and
start/run/cmd.

Just a thought.

Josh
--
A common mistake that people make when trying to design something
completely foolproof is to underestimate the ingenuity of complete
fools.
       -Douglas Adams

2007\07\11@075811 by Gerhard Fiedler

Josh Koffman wrote:

>> Yes, if it's a console Windows application, it should behave like any
>> other piece of Windows software.
>
> Could it be you're running the wrong console? I seem to recall that
> there's a difference between doing start/run/command and
> start/run/cmd.

There sure is. That's the thing with talking about a "DOS box".

The WinNT+ cmd.exe looks (almost) like the DOS box of yore, but isn't --
it's a different, and fully Windows, application. No different from
running other normal Windows applications.

command.com is a different beast. It's not a Win32 application, and it
is severely limited compared to cmd.exe. I don't really know why it's
there at all. Possibly for some odd enterprise batch file that uses some
strange feature and doesn't run on cmd.exe.

They are easily distinguishable, though. The first few lines look like

 Microsoft(R) Windows DOS
 (C)Copyright Microsoft Corp 1990-2001.

 Microsoft Windows XP [Version 5.1.2600]
 (C) Copyright 1985-2001 Microsoft Corp.

While the former proclaims itself as a "DOS box", the latter says it's an
"XP box" :)

(On a side note... I wonder why the copyright notice of cmd.exe goes
farther back than the one of command.com. Historically it's probably
younger. I don't think this is relevant for copyright reasons, though.)

Gerhard

2007\07\11@084629 by Hans Ruopp



Gerhard Fiedler wrote:
> (On a side note... I wonder why the copyright notice of cmd.exe goes
> farther back than the one of command.com. Historically it's probably
> younger. I don't think this is relevant for copyright reasons, though.)
>
> Gerhard
>
>  
MS released its first version of MS-DOS in 1981 or 1982, I really don't
remember.

In 1981 IBM released its first PC (which I bought and used for almost
10 years). MS kept developing both PC-DOS and its own MS-DOS for quite
some time, until IBM decided to continue PC-DOS all alone.

In 1985 MS released its first version of MS-Windows, which was based on
original ideas from the Xerox lab in Palo Alto.

Probably they keep the year for historical reasons, who knows?

Cheers

Hans

2007\07\11@184102 by Gerhard Fiedler

Hans Ruopp wrote:

> Gerhard Fiedler wrote:
>> (On a side note... I wonder why the copyright notice of cmd.exe goes
>> farther back than the one of command.com. Historically it's probably
>> younger. I don't think this is relevant for copyright reasons, though.)
>  
> MS released its first version of MS-DOS in 1981 or 1982, I really
> don't remember.
>
> In 1981 IBM released its first PC (which I bought and used for almost
> 10 years). MS kept developing both PC-DOS and its own MS-DOS for quite
> some time, until IBM decided to continue PC-DOS all alone.
>
> In 1985 MS released its first version of MS-Windows, which was based
> on original ideas from the Xerox lab in Palo Alto.
>
> Probably they keep the year for historical reasons, who knows?

Yes, but the command.com history (per copyright notice) starts in 1990,
while the cmd.exe one starts in 1985. That's what I wondered about. I
would have thought that it should be either the same year on both, or
the other way 'round.

Gerhard

2007\07\12@025534 by Hans Ruopp



Gerhard Fiedler wrote:
{Quote hidden}

I think they show only the '90s copyright because that was when they
introduced the memory management feature which permitted apps to use
memory above 640K and made Win 3.0 possible. I remember that when they
released the 5.0 version it looked like a completely rewritten DOS --
many new features, etc. Probably that's the reason.

Regards

Hans Ruopp


2007\07\12@065813 by Russell McMahon

I seem to have found my SATA transfer rate problem.

Trap for young (and old) players / obvious in retrospect :-( / My
fault / Microsoft's fault / There must be a better way ...

Drives may be "optimised" for write performance or for "safe
removal". Choose one. Safe removal disables caching and delayed write.
While the general concept is obvious, I'm not sure exactly what that
entails in practice (although I imagine that gadfly knows), but it may
write sector by sector, leaving no data in the buffer any longer than
essential.
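
In application terms, the effect is presumably something like the
following sketch (file name and sizes invented for illustration):

  # What "safe removal" mode effectively amounts to per write: no data
  # sits in a write-back cache. The forced flush is why it is so slow.
  import os

  data_chunks = [b"\0" * 65536] * 16      # illustrative payload

  with open("E:/backup.bin", "wb") as f:  # ASSUMPTION: example path
      for chunk in data_chunks:
          f.write(chunk)
          f.flush()
          os.fsync(f.fileno())            # wait for the disk each time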

I had set this drive for "safe removal". The result is crippling.

This drive is one of several that I am setting up as a backup set
which may be removed. I can see that formally stopping the drive is
going to be the superior choice :-).

Now even "DOS" copy command gives transfer rates of over megabytes per
second. Over 10 times the rate of what I was getting.



       Russell


FYI:        I was/am using "CMD.COM" which, as noted, gives full
capability to Windows programs invoked within it.


2007\07\12@071548 by Peter Bindels

On 12/07/07, Russell McMahon <apptech@paradise.net.nz> wrote:
> I seem to have found my SATA transfer rate problem.
>
> Trap for young (and old) players / obvious in retrospect :-( / My
> fault / Microsoft's fault / There must be a better way ...
>
> Drives may be "optimised" for write performance or for "safe
> removal". Choose one. Safe removal disables caching and delayed write.
> While the general concept is obvious, I'm not sure exactly what that
> entails in practice (although I imagine that gadfly knows), but it may
> write sector by sector, leaving no data in the buffer any longer than
> essential.
>
> I had set this drive for "safe removal". The result is crippling.

That's Microsoft not figuring out how to do it right. Either make an
application interface that allows you to inform the driver when you're
done with a transaction (multiple file copy) or make a filesystem that
contains logging / journalling so it doesn't crash and burn if you
unplug it.

Win2k by default assumed you'd indicate when you unplugged a device,
giving you a nagging error when you didn't. The new one always
performs "safe" things that are dog slow, even when you're clearly not
going to unplug it for 2 more hours.

2007\07\12@073851 by Gerhard Fiedler

Russell McMahon wrote:

> FYI:        I was/am using "CMD.COM" which, as noted, gives full
> capability to Windows programs invoked within it.

That's probably cmd.exe. (Note the difference between .exe and .com -- .com
are not Win32 applications.)

Gerhard

2007\07\12@075242 by Gerhard Fiedler

Peter Bindels wrote:

>> Drives may be "optimised" for write performance or "safe removal".
>> Choose one. Safe removal removes caching and delayed write. [...]
>>
>> I had set this drive for "safe removal". The result is crippling.
>
> That's Microsoft not figuring out how to do it right. Either make an
> application interface that allows you to inform the driver when you're
> done with a transaction (multiple file copy) or make a filesystem that
> contains logging / journalling so it doesn't crash and burn if you
> unplug it.

I think I don't understand your point. AIUI, the drive caching is supposed
to be transparent; that is, the application uses the file system API in the
same way, independently of any read, read-ahead or write caching.

Write caching implies that the system returns success to the app on a
write before the data is written from system memory to the drive (or
drive memory). Which IMO necessarily implies that there may be data
loss if the drive is removed before the cached data is written to the
drive -- and the application can't do anything to prevent that, and
neither can the file system (as it is on the drive that hasn't seen
the data yet).

IMO this is not about the file system "crash and burn" -- I don't think
NTFS does that if you unplug it. But I'm pretty sure that if you use an app
like xxcopy to move files, use write caching and remove the drive during
the process, that you'll have data loss. A journalling file system on the
drive that is being removed won't be able to prevent that.


> Win2k by default assumed you'd indicate when you unplugged a device,
> giving you a nagging error when you didn't. The new one always performs
> "safe" things that are dog slow, even when you're clearly not going to
> unplug it for 2 more hours.

I think the thing is that either you know what you're doing with the drives
(then you can safely set it to caching mode) or you don't know what you're
doing (then it should be set to the safer setting). Don't think only about
yourself (who knows about write caching), but also about the vast majority
of people who don't -- they may need the safer setting, and they wouldn't
know how to change it to be safe. AFAIK, once you set it, the system
remembers your setting for future uses, and to me, it makes sense to have
the default (only the first time a certain drive is connected) to be the
safer setting.

Gerhard

2007\07\12@082858 by Dario Greggio

Gerhard Fiedler wrote:

> Write caching implies that the system returns success to the app on a [...]

Well, I guess all this (the recent updates, I mean) makes sense.

Until the cache is written to disk, all operations can be done in
memory: the delayed-write mechanism helps improve this side of the
process.
If it is disabled, writes go through the cache to the disk for every
access.

And... people want "safe" systems, so the cache had to be turned off by
default... or a pen drive would be removed, and data lost.
Caches are nice, but risky. Otherwise people would say "Windows is not
good" :)

(Of course, I agree that everything can be improved...)



--
Ciao, Dario

2007\07\12@083458 by Peter Bindels

On 12/07/07, Gerhard Fiedler <lists@connectionbrazil.com> wrote:
> Peter Bindels wrote:
> > That's Microsoft not figuring out how to do it right. Either make an
> > application interface that allows you to inform the driver when you're
> > done with a transaction (multiple file copy) or make a filesystem that
> > contains logging / journalling so it doesn't crash and burn if you
> > unplug it.
>
> I think I don't understand your point. AIUI, the drive caching is supposed
> to be transparent; that is, the application uses the file system API in the
> same way, independently of any read, read-ahead or write caching.

That's true. Transactions would inform the filesystem about the
application's view of the likelihood of a new event occurring within a
few seconds. When you're copying 20000 files, you can be sure that more
commands are going to follow until all 20000 files are copied. The file
system can then assume it won't be unplugged until that's done (and
will have to cache writes to it when it is unplugged, only writing them
out when it's back online - such as after a short power fail).

> Write caching implies that the system returns success to the app on a
> write before the data is written from system memory to the drive (or
> drive memory). Which IMO necessarily implies that there may be data
> loss if the drive is removed before the cached data is written to the
> drive -- and the application can't do anything to prevent that, and
> neither can the file system (as it is on the drive that hasn't seen
> the data yet).

They can't prevent losing data that couldn't be written out yet - but
they can prevent the filesystem itself becoming corrupt because you
unplug it halfway through a copy action. FAT is unusable in that regard
by definition, since to place a file you have to update three
locations - two of which need to be done simultaneously. You can store
the data in unused locations (always safe, since they are still
unused), then you tell the FAT that you're using those sectors + their
order, and then you tell the directory where that file is located. The
last two steps aren't atomic, so you can end up in a state where the
file is half on the disk (taking space in the FAT) but isn't there (no
file location), which means that subsequent deallocations and
allocations can go awfully wrong in numerous ways, which usually ends
up with a filesystem you cannot rely on in any way.

> IMO this is not about the file system "crash and burn" -- I don't think
> NTFS does that if you unplug it. But I'm pretty sure that if you use an app
> like xxcopy to move files, use write caching and remove the drive during
> the process, that you'll have data loss. A journalling file system on the
> drive that is being removed won't be able to prevent that.

NTFS uses journalling. Journalling means that instead of writing the
two halves mentioned above separately, you enqueue a single write --
"write A to the FAT and B to this location" -- in a
computer-understandable format. That write is atomic (by definition:
a sector is written or it isn't, or it is corrupt, in which case you
can treat it as unwritten). You then carry out what the journal entry
says (non-atomically) and then you remove the journal entry. To
recover, take the current disk state and apply the journal entries in
order. The final state is the real state; there is no way it can become
corrupt halfway through.
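
A toy illustration of the idea, with the file names and on-disk format
invented for the sketch (real NTFS journalling is far more involved):

  # Write-ahead journalling in miniature: record the intent atomically,
  # apply it, then retire the record. Recovery just replays the record.
  import json, os

  JOURNAL = "journal.log"              # stands in for the on-disk log

  def journaled_update(fs, updates):
      with open(JOURNAL, "w") as j:    # 1. write the intent first
          j.write(json.dumps(updates))
          j.flush()
          os.fsync(j.fileno())
      fs.update(updates)               # 2. the real, non-atomic writes
      os.remove(JOURNAL)               # 3. retire the journal entry

  def recover(fs):
      # After a crash, re-apply any surviving record; applying it twice
      # is harmless because the updates are idempotent.
      if os.path.exists(JOURNAL):
          with open(JOURNAL) as j:
              fs.update(json.load(j))
          os.remove(JOURNAL)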

{Quote hidden}

I'm thinking mainly that I really don't want to care about write
caching even though I know what it is. It should use write caching
when it can indicate to me that it is "active" and it should not use
write caching when it appears to be nonactive. If you write a single
file to the filesystem, that should be with journalling and techniques
to prevent filesystem integrity violation; when I perform a mass copy
that should be as fast as possible and it should indicate to me when
it's done (after which I should be able to just yank out the cable
without complaints).

Win2k does it very wrong, only allowing the tedious way. WinXP does it
wrong (not all that wrong) since it allows you to choose. You still
explicitly decide about write caching which is NOT how write caching
should be implemented. If it were a proper feature, it would
accelerate my writes without my personal care. Given that Microsoft
has been messing with write caching for 20 years, they should have
figured this out by now.

Regards,
Peter

2007\07\12@092351 by Russell McMahon

>> FYI:        I was/am using "CMD.COM" which, as noted, gives full
>> capability to Windows programs invoked within it.

> That's probably cmd.exe. (Note the difference between .exe and
> .com -- .com
> are not Win32 applications.)

You are correct.
I noticed that I'd incorrectly put ".COM" (years of writing
'command.com' :-) ) BUT still managed to send it without fixing the
error.


       Russell

2007\07\12@092351 by Russell McMahon

>> I seem to have found my SATA transfer rate problem.

> That's Microsoft not figuring out how to do it right. Either make an

Indeed.
As I said :-)

>>  ... My fault / Microsoft's fault  ...

BUT I expect Micro$oft to set traps for me - it's my job to find them
before they bite me :-).
In this case I failed.

       Russell


2007\07\12@131737 by Tomas Larsson

> -----Original Message-----
> From: piclist-bounces@mit.edu
> [piclist-bounces@mit.edu] On Behalf Of Russell McMahon
> Sent: Thursday, July 12, 2007 3:23 PM
> To: Microcontroller discussion list - Public.
> Subject: Re: [EE]:: SATA transfer rate
>
> >> I seem to have found my SATA transfer rate problem.
>
> > That's Microsoft not figuring out how to do it right. Either make an
>
> Indeed.
> As I said :-)

I think you got it a little bit wrong.
The delayed write / write cache actually resides on the disk itself; the
OS can't do very much about it.
Probably you can't turn it off or on at will, just flush the cache (that
is what the "disconnect" stuff actually does).
I think that the primary function is to release the bus much quicker,
i.e. you don't need to wait for the disk/bus to actually write the stuff
on the platters and then become ready; also, if the same data is wanted
in a subsequent read, it's returned from the cache instead of being read
from the platters.

IMO neither the read nor the write cache should have any large impact on
a long sequential file transfer; max speed, regardless of any cache
settings, should be the speed it takes to get the stuff off and on the
platters.
The read cache could in some cases actually reduce the transfer rate,
since it's possible that the next part of the file(s) is not located on
the next following sector, and then the cache has to be discarded.
However, most of the time, when reading and writing small parts, it
would speed things up.


With best regards

Tomas Larsson
Sweden
http://www.tlec.se
http://www.ebaman.com

Verus Amicus Est Tamquam Alter Idem

2007\07\12@211609 by Hector Martin

Tomas Larsson wrote:
> I think you got it a little bit wrong.
> The delayed write / write cache actually resides on the disk itself;
> the OS can't do very much about it.

While hard drives do have write caches, so does the OS. In this case
it's the OS's cache we're talking about. I'm also pretty sure many flash
drives have next to no cache, unlike real platter-based hard drives.

Disconnecting a drive will:
1. Flush the OS cache
2. Unmount the filesystem
3. Flush the drive cache

> I think that the primary function is to release the bus much quicker,
> i.e. you don't need to wait for the disk/bus to actually write the
> stuff on the platters and then become ready; also, if the same data is
> wanted in a subsequent read, it's returned from the cache instead of
> being read from the platters.

That's part of the function of the hard drive's cache, but the OS also
maintains a write buffer. This is so the OS can bundle several writes
together onto one larger write which is more appropriate for the drive.
The OS may also do rescheduling and reordering to attempt to minimize
seeks and the like. Especially on filesystems like FAT, these things
help speed a lot.

> IMO neither the read nor the write cache should have any large impact
> on a long sequential file transfer; max speed, regardless of any cache
> settings, should be the speed it takes to get the stuff off and on the
> platters.
> The read cache could in some cases actually reduce the transfer rate,
> since it's possible that the next part of the file(s) is not located
> on the next following sector, and then the cache has to be discarded.
> However, most of the time, when reading and writing small parts, it
> would speed things up.

This depends on the size and state of the individual files. IF the copy
application is smart and writes large sector-aligned blocks (say 64 *
512 bytes per block) AND the filesystem doesn't require metadata updates
for every sector or small group of sectors written AND the files aren't
fragmented AND they're large, then yes, it can get close.

With no caching, when writing a FAT file, you may end up doing many,
many operations. Let's say the application writes 128 bytes at a time,
and the cluster size is 2kb (4 sectors):

- Write directory entry
- Write FAT entry for the first cluster
- Write 512-byte sector 0 with new 128 bytes of data at offset 0
- Write directory entry with new size
- Write 512-byte sector 0 with new 128 bytes of data at offset 128
- Write directory entry with new size
- Write 512-byte sector 0 with new 128 bytes of data at offset 256
- Write directory entry with new size
- Write 512-byte sector 0 with new 128 bytes of data at offset 384
- Write directory entry with new size
- Write 512-byte sector 1 with new 128 bytes of data at offset 0
- Write directory entry with new size
- Write 512-byte sector 1 with new 128 bytes of data at offset 128
- Write directory entry with new size
[...and so on, with a new FAT entry written each time another 4-sector
cluster is allocated]

Obviously, this is horribly inefficient. Here's the caching version
(assuming everything is cached up for this particular file, and no
fragmentation to simplify things):

- Write directory entry with final size (one sector write)
- Write FAT entries (a few sector writes, since many FAT entries fit in
a sector)
- Write a number of large 64-sector (32kb) bursts to the hard drive.

MUCH faster. It gets even better if the files are small (even if the
copy is of a large number of files, you'd still need to update metadata
for each file with no caching).
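
To put rough numbers on the difference, here is a back-of-the-envelope
count of device writes for both strategies (the 10MB file size and the
2-byte FAT16 entries are assumptions of the sketch):

  # Device writes for one 10 MB file, uncached vs. cached, following
  # the example above: 128-byte app writes, 512-byte sectors, 2 KB
  # clusters, 32 KB coalesced bursts.
  FILE = 10 * 1024 * 1024
  APP, SECTOR, CLUSTER, BURST = 128, 512, 2048, 32 * 1024

  uncached = (FILE // APP) * 2        # rewrite data sector + dir entry
  uncached += FILE // CLUSTER         # FAT update per allocated cluster

  cached = FILE // BURST              # coalesced 64-sector data bursts
  cached += 1                         # one final directory entry
  cached += (FILE // CLUSTER) * 2 // SECTOR + 1  # packed FAT16 sectors

  print(uncached, cached)             # about 169000 vs. about 340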

Actually, I think FAT does not update the file size until you close the
file, which may help (this is probably from the old DOS days, to avoid
all the useless updates). Under UNIX, which has sane file management
semantics, the file size is always up-to-date with the file content.
Thank goodness it's had caching for a long time now (together with good
old mount/umount).

However, I must say that Windows isn't too horrible in its
implementation, letting users decide which of the two modes to use.
Under Linux it works in much the same way (the default is to cache,
which requires umount to work safely, but there's an option to flush
everything to disk immediately). I haven't benchmarked anything, but
both implementations work in roughly the same way. This is one of those
occasions when there is no Right Thing to do, at least not without
supplementing the user API (besides at least trying to autodetect what
option to use based on the kind of drive at hand, though of course this
may end up confusing users who get the wrong behavior selected for
their particular setup).
--
Hector Martin (hector@marcansoft.com)
Public Key: http://www.marcansoft.com/marcan.asc

2007\07\13@031312 by Tomas Larsson


> -----Original Message-----
> From: piclist-bounces@mit.edu
> [piclist-bounces@mit.edu] On Behalf Of Hector Martin
> Sent: Friday, July 13, 2007 3:16 AM
> To: Microcontroller discussion list - Public.
> Subject: Re: [EE]:: SATA transfer rate
>
> Tomas Larsson wrote:
> > I think you got it a little bit wrong.
> > The delayed write / write cache is actually residing on the disk
> > itself, the OS can't do very much about it.
>
> While hard drives do have write caches, so does the OS. In
> this case it's the OS's cache we're talking about. I'm also
> pretty sure many flash drives have next to no cache, unlike
> real platter-based hard drives.

The cache setting in the device manager is for switching the hard-drive
write cache on or off, not the OS's.
I don't think I've ever seen a setting for the OS disk buffers for
anything other than flash drives.
As far as I know, the OS does not maintain any configurable buffers for
hard drives, only for memory cards and similar.

With best regards

Tomas Larsson
Sweden
http://www.tlec.se
http://www.ebaman.com

Verus Amicus Est Tamquam Alter Idem

2007\07\13@045902 by Hector Martin

Tomas Larsson wrote:
{Quote hidden}

This may be why Windows is so slow, then. Linux tends to use up all
available RAM in disk buffers (without locking it, of course - if you
need the RAM it gets freed; it's just a way of doing something
productive with it instead of just letting it sit around).

400MB worth of buffers in RAM on my system, currently.


--
Hector Martin (hector@marcansoft.com)
Public Key: http://www.marcansoft.com/marcan.asc

2007\07\13@054626 by Tomas Larsson

{Quote hidden}

One thing just crossed my mind: I was probably a little bit wrong.
In XP Pro you can select the role of the computer; there is a setting in
the System applet where you can select whether system memory should be
optimized for programs or for the system cache, and another setting for
whether CPU scheduling should be optimized for background or foreground
tasks.

These settings are AFAIK not available in XP Home.

With best regards

Tomas Larsson
Sweden
http://www.tlec.se
http://www.ebaman.com

Verus Amicus Est Tamquam Alter Idem

2007\07\13@065953 by Gerhard Fiedler

Tomas Larsson wrote:

> The cache setting in the device manager is for switching the
> hard-drive write cache on or off, not the OS's.

Yes, and no, at least in WinXP Pro.

The dialog says quite clearly that it controls both the write caching on
the disk /and/ in Windows. They are even separately controllable.

Gerhard

2007\07\13@153047 by Nestor A. Marchesini

Hector Martin wrote:
> This may be why Windows is so slow, then. Linux tends to use up all
> available RAM in disk buffers (without locking it, of course - if you
> need the RAM it gets freed; it's just a way of doing something
> productive with it instead of just letting it sit around)
>
> 400MB worth of buffers in RAM on my system, currently.

The equivalent of "safely remove" before unplugging is:

$ sync
$ umount /dev/xxxx

Regards

Néstor A. Marchesini
Chajari-Entre Rios-Argentina



2007\07\13@193559 by Hector Martin

Nestor A. Marchesini wrote:
> The synonymous one of sure extraction before dismounting is:
>
> $ sync
> $ umount /dev/xxxx

I'm pretty sure the sync is redundant. umount runs sync itself and makes
sure everything is flushed to disk before completing.

--
Hector Martin (hector@marcansoft.com)
Public Key: http://www.marcansoft.com/marcan.asc

2007\07\13@235607 by Nestor A. Marchesini

Hector Martin wrote:
> Nestor A. Marchesini wrote:
>> The synonymous one of sure extraction before dismounting is:
>>
>> $ sync
>> $ umount /dev/xxxx
>
> I'm pretty sure the sync is redundant. umount runs sync itself and makes
> sure everything is flushed to disk before completing.
>  
I presume that umount executes sync before unmounting, but sync may
still be useful if one does big file moves and then keeps using the PC
with the partition still mounted... the problem would come if the power
were cut right then.
Executing sync after a copy or move would avoid losing files.

Regards
Néstor A. Marchesini
Chajari-Entre Rios-Argentina

2007\07\14@051709 by Hector Martin

Nestor A. Marchesini wrote:
> I presume that umount executes sync before unmounting, but sync may
> still be useful if one does big file moves and then keeps using the PC
> with the partition still mounted... the problem would come if the
> power were cut right then.
> Executing sync after a copy or move would avoid losing files.

That is correct.

--
Hector Martin (hector@marcansoft.com)
Public Key: http://www.marcansoft.com/marcan.asc
