PICList Thread
'[OT] Fastest file transfer over Ethernet'
2008\06\18@094903 by Tomás Ó hÉilidhe


I want to transfer files as quickly as possible over Ethernet.

They say FTP is faster than Samba if you're copying a small number of
large files, but that Samba is faster than FTP if you're copying a large
number of small files.

I mostly copy a small number of large files, e.g. files between
500 MB and 2 GB, so I'm thinking of going with FTP. So far, though, I've
been using Samba.

Should I consider any other methods of copying over Ethernet? FTP and
Samba both sit on top of TCP, which I really don't need because I'll be
copying across a cross-over cable, and packet loss on a LAN is something
like one packet per month, so I'd prefer if I could get some sort of
acknowledgement-less system, perhaps a protocol that runs on top of UDP.
I'd prefer to use an MD5 checksum to confirm that the copy was error-free.
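
For the MD5 check, something along these lines would do - a minimal
sketch, assuming Python on both machines (hashlib is in the standard
library; the filename is just a placeholder):

import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    # Read in 1 MB chunks so multi-GB files don't have to fit in RAM.
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# Run on both machines and compare the two digests.
print(md5_of_file('bigfile.bin'))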

The other day, using Samba, it took me 15 minutes to copy 8.19 gigabytes
across a 100 Mbps Ethernet connection.

8.19 gigabytes = 70 412 301 552 bits

15 minutes = 70 412 301 552 bits

1 minute = 4 694 153 436 bits

1 second = 78 235 890 bits

Transfer speed = 78.2 Mbps

That actually doesn't sound too bad at all for a 100 Mbps connection! Do
you reckon I can do better?

My laptop has a gigabit NIC but unfortunately the donor computer only
has 100 Mbps.

2008\06\18@101656 by Tomás Ó hÉilidhe


I copied a 696 MB file just there.

Samba took 1 minute 21 seconds.
FTP took 1 minute 14 seconds.

Samba = 81 seconds per 5 846 335 488 bits = 1 second per 72 176 981 bits
= 72.2 Mbps

FTP = 74 seconds per 5 846 335 488 bits = 1 second per 79 004 533 bits =
79 Mbps

2008\06\18@104651 by Tamas Rudnai

Try TFTP, which uses UDP instead - with no security measures or error
checking, but fingers crossed...

http://en.wikipedia.org/wiki/Trivial_File_Transfer_Protocol
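
The wire format is tiny, too. As a rough sketch (per RFC 1350, in
Python; note that even TFTP ACKs every 512-byte data block):

import socket, struct

def tftp_first_block(host, filename):
    # RRQ packet: opcode 1, then filename and mode, each NUL-terminated.
    rrq = struct.pack('!H', 1) + filename.encode() + b'\0octet\0'
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.sendto(rrq, (host, 69))       # TFTP servers listen on UDP port 69
    data, addr = s.recvfrom(516)    # 2B opcode + 2B block no. + up to 512B
    opcode, block = struct.unpack('!HH', data[:4])
    s.sendto(struct.pack('!HH', 4, block), addr)  # opcode 4 = ACK
    return data[4:]                 # payload of the first data block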

Tamas


On Wed, Jun 18, 2008 at 2:48 PM, Tomás Ó hÉilidhe <toe@lavabit.com> wrote:

{Quote hidden}


2008\06\18@104720 by Herbert Graf


On Wed, 2008-06-18 at 14:48 +0100, Tomás Ó hÉilidhe wrote:
> The other day, using Samba, it took me 15 minutes to copy 8.19 gigabytes
> across a 100 Mbps Ethernet connection.
>
> 8.19 gigabytes = 70 412 301 552 bits
>
> 15 minutes    =    70 412 301 552 bits
>
> 1 minute = 4 694 153 436 bits
>
> 1 second = 78 235 890 bits
>
> Transfer speed = 78.2 Mbps
>
> That actually doesn't sound too bad at all for a 100 Mbps connection! Do
> you reckon I can do better?

Perhaps a little, but not much better. When my network was 100Mbps I
would barely reach the 90s, and that was when transferring one large
file. Any time you transfer multiple files you will lose speed: some
goes to the transfer protocol, some to the hard drive having to seek.
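
(For reference, the wire format caps you in the low 90s anyway: a
full-size TCP segment carries 1460 payload bytes out of roughly 1538
bytes on the wire once the Ethernet preamble, headers, FCS and
inter-frame gap are counted, i.e. 1460/1538 * 100MBit = ~95 Mbit/s
best case.)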

> My laptop has a gigabit NIC but unfortunately the donor computer only
> has 100 Mbps.

Put a gigabit card into the computer. They cost about $15 these days.
With that, you'll likely see speeds of around 200-300Mbps, the limit
then being the laptop's drive.

TTYL

2008\06\18@104727 by Apptech

> The other day, using Samba, it took me 15 minutes to copy 8.19
> gigabytes across a 100 Mbps Ethernet connection.

...

> Transfer speed = 78.2 Mbps
>
> That actually doesn't sound too bad at all for a 100 Mbps connection!
> Do you reckon I can do better?

I reckon you should bottle it and sell it !!! :-)



       Russell

2008\06\18@104805 by Jake Anderson

Tomás Ó hÉilidhe wrote:
> I copied a 696 MB file just there.
>
> Samba took 1 minute 21 seconds.
> FTP took 1 minute 14 seconds.
>
> Samba = 81 seconds per 5 846 335 488 bits = 1 second per 72 176 981 bits
> = 72.2 Mbps
>
> FTP = 74 seconds per 5 846 335 488 bits = 1 second per 79 004 533 bits =
> 79 Mbps
>
>  
If you're really keen, try a netcat tar pipe:
http://compsoc.dur.ac.uk/~djw/tarpipe.html

For bonus points, use UDP for the transport rather than TCP - though
that only really matters if you're CPU bound.

FTP and HTTP are probably pretty equivalent (same with a tar pipe).
When benchmarking, beware of caching.

It looks like you're only transferring at about 8 megabytes per second?
Get gig-E, or one of those USB/FireWire things people were talking
about. That should bump you up to around 30 MB/sec.
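
A bare-bones UDP copy is only a few lines - a sketch, not a tool (the
port number and filenames are placeholders; there is no sequencing and
no retransmission, and a fast sender will happily overrun the
receiver's socket buffer, which is exactly why real protocols grow an
ACK scheme):

import socket

PORT, CHUNK = 9000, 1400   # keep the payload under the 1500-byte Ethernet MTU

def send(path, host):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with open(path, 'rb') as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            s.sendto(data, (host, PORT))
    s.sendto(b'', (host, PORT))        # empty datagram marks end-of-file

def recv(path):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(('', PORT))
    with open(path, 'wb') as f:
        while True:
            data, _ = s.recvfrom(CHUNK)
            if not data:
                break
            f.write(data)

Check the result with an MD5 sum as suggested above; on anything busier
than an idle crossover cable, expect missing chunks.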

2008\06\18@105051 by Massimo Gaggero

Tomás Ó hÉilidhe ha scritto:
{Quote hidden}

Have you measured the raw read/write speed of your hard drives?
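
A quick-and-dirty number, as a Python sketch (the filename is a
placeholder; the result is meaningless if the file is already in the
OS cache, so use a large, freshly written file):

import time

def read_speed_mb_per_s(path, chunk=1 << 20):
    # Sequentially read the file in 1 MB chunks and time it.
    start, total = time.time(), 0
    with open(path, 'rb') as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    return total / (time.time() - start) / 1e6

print(read_speed_mb_per_s('bigfile.bin'))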

Massimo.

--
  ____  ____  ____  _  _
 / ___)| __ \/ ___)/ /| | Dott. Massimo Gaggero
| (___ |    /\___ \\__  | Expert Software Engineer
 \____)|_|\_\(____/   |_| Advanced Computing and Communications -
Distributed Computing
E-mail: max@crs4.it       Phone: +39 070 9250 329

2008\06\18@125934 by Rolf

Jake Anderson wrote:
{Quote hidden}

Hmmm... the TCP/UDP thing is more complex than you suggest... it
always matters, not just when you're CPU bound.

With TCP there is an ACK for every packet. In essence, your network
latency becomes the bottleneck. With a 1500-byte packet and 0.5ms
latency, a packet transfers in (8*1500)/100,000,000 of a second, plus
0.5ms = 0.12ms + 0.5ms. Then there has to be an ACK for each packet -
another small packet sent, with another 0.5ms of latency... it all
boils down to latency being 80% of the time, not the actual data.

To put numbers on it, I tested some real networks:
Between two gigabit-connected Linux machines with a dedicated switch, I
got a latency of 0.1ms.
Between a gigabit-connected Linux host and an XP machine on 100MBit, I
got 0.4ms.
Between a gigabit-connected Linux host and http://www.google.com I got 12ms.

The next parameter is the receive window (how many bytes will be sent
before an ACK must be received). With a receive window of 3000 bytes
and a packet size of 1500 bytes, only two packets can be sent before
the sender sits and waits for an ACK.

Take the 'default' packet size of 1500 bytes and a receive window of
18000 bytes (roughly the WinXP default - Linux defaults to 65535 bytes)
on a 100MBit network, assuming no packet errors, and a 24-byte IPv4
header plus a 24-byte TCP header.

12 packets get sent (12 * 1524 = ~18000 bytes) at 0.12ms per packet, or
1.44ms in total. But of each 1524 bytes, at least 48 are TCP/IP headers,
so only 1476 bytes of real data get sent. The sender (WinXP) then waits
for an ACK: with a round-trip latency of 0.4ms, it takes 0.2ms for the
first packet to reach the receiver and 0.2ms for the ACK to come back
(the ACK itself is 24 bytes and takes 0.003ms to transmit).
The sender gets the first ACK and then sends the next packet.

Because the 0.4ms latency is shorter than the time taken to fill the
receive window (the time to send 12 packets), you don't end up with a
latency bottleneck. But look at what happens when the latency creeps
up and the transmit speed goes up too...

With 1500-byte TCP packets on gigabit, with 1ms latency and an
18000-byte receive window:
12 packets get sent in 0.144ms.
The transmitter waits for the ACK, which arrives after an additional
0.86ms, during which time the transmitter is idle.
The transmitter then sends another packet, which is shortly followed by
another ACK (acking the second transmitted packet).
The 12-packet window is rapidly ACK'd (in 0.144ms), and as a result the
13th through 24th packets are rapidly sent.
But the 13th packet is still winging its way over to the receiver, so
the sender goes idle for 0.86ms waiting for the ACK on packet 13.
This cycle repeats in such a way that the sender only really sends data
for 0.144ms and then idles for 0.86ms.

With a network between two endpoints, the effective bandwidth is:

(("packet size" - "header size") / "packet size") * "bit rate"

where the "bit rate" is the achievable transmission speed.

If the latency is shorter than the time taken to transmit a receive
window's worth of data, then the bit rate is the physical bit rate, for
example 100MBit or 1GBit, giving an effective bit rate of
1476/1524 * 100MBit = 97MBit for a 100MBit connection with a packet
size of 1500 bytes, or 11.5MB per second.

If the latency is longer than the time taken to transmit a receive
window's worth of data, then the bit rate is more complex, and is
approximately:

(receive window * 8) / latency

With a receive window of 18000 bytes and 1ms latency, the network bit
rate is 144MBit. Plugging that back into the first equation, you get
(1476/1524) * 144MBit = 139MBit, or 16.6MB per second.

Thus, on a gigabit network, with 1ms latency, and default 18000 receive
window, you can transfer at max 16.6MB/s.

With 0.2ms latency, and default 18000 receive window, the max transfer
rate is 88MB/s

With 0.1ms latency, the physical layer is the bottleneck, and the max
transfer rate is 107MB/s
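
The whole model fits in a few lines. A sketch of the calculation in
Python (using the same assumed 1524-byte packets with 48 bytes of
headers; real stacks differ in header sizes and ACK behaviour):

def tcp_throughput_bps(link_bps, rwin_bytes, rtt_s, packet=1524, headers=48):
    # One receive window per round trip, capped by the physical bit
    # rate, then discounted by the per-packet header overhead.
    window_bps = rwin_bytes * 8 / rtt_s
    return (packet - headers) / packet * min(link_bps, window_bps)

# Gigabit, 18000-byte window, 1ms latency -> ~139 Mbit/s (~16.6 MB/s)
print(tcp_throughput_bps(1e9, 18000, 1e-3) / 1e6)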

Since latency is largely down to the speed of light (propagation speed
in fibre or copper cabling) and to buffering in network routers and
bridges, it makes sense that latency is proportional to the length of
cable between two points, as well as to the number of routers between
them. Hence the different latencies you see around the world: there is
about 7us of latency added for each kilometer of 'wire distance' in a
network, or 28ms from NY to LA, and I measure 200ms between Toronto and
England, as well as 400ms to South Africa.

The point is that with UDP there is no ACK process, and thus latency
does not play a part in the bandwidth calculation. Transferring data
via UDP means that you are effectively limited only by the hardware
bandwidth. On a network with real latency (multiple routers, long
cable runs), UDP can enormously improve your transfer speed.

Another point: by modifying your maximum packet size (from 1500 bytes)
and by modifying your receive window (RWIN), you can significantly
improve your TCP performance too.

Rolf


2008\06\18@155443 by William "Chops" Westfield


On Jun 18, 2008, at 9:59 AM, Rolf wrote:

> The point is, that with UDP there is no ACK process, and thus latency
> does not play a part in the bandwidth calculation.

Wrong, usually. Without some sort of ACK process, you tend to lose
reliability. With TFTP (which someone mentioned, and which runs over
UDP) there is an ACK for every packet, and NO WINDOWING, so TFTP is
usually MUCH slower than (TCP-based) FTP, which only needs an ACK per
window. I'm not too familiar with other UDP-based file transfer
protocols (NFS, ?), but common sense says they all have some sort of
ACK strategy, or a fast transmitter would never be able to send to a
slow receiver.

You're correct that TCP transfers are frequently limited by latency to
a bandwidth of (window size in bits / latency), although some of your
details were misleading (for instance, it's relatively uncommon for a
TCP to ACK *every* packet). The biggest thing you can do to improve
speed may be to figure out how to tune your window size. TCP supports
window sizes up to 64k in the original protocol, and the "window
scaling" extension supports much larger window sizes. I'm pretty sure
there is a Windows tool that allows you to increase the default window
size...
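
As a worked example (taking the classic 65535-byte maximum window and
Rolf's measured 12ms to google.com): 65535 * 8 / 0.012s = ~44 Mbit/s,
and that's the ceiling no matter how fast the link is.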

For multiple small file transfers, there are some algorithms in TCP  
that are supposed to ensure fair use of shared bandwidth ("slow  
start") that will slow down transfers a bit.

BillW

2008\06\18@210831 by Jake Anderson

William Chops Westfield wrote:
{Quote hidden}

You can also look at jumbo frames, if both endpoints support them.
Some Intel-based NICs can transfer ~16KB in a packet.

Really, though, you're not going to see much performance gain on a
100MBit connection. When you get to gig-E, all this stuff starts to
matter more.
