I have two My Book World Edition II (MBWE II) units and four My Book Live (MBL) units, all 2 TB versions.
All six drives are plugged into the same Cisco 3550-12T. It is a gigabit switch, and every port is capable of line rate, since each port has its own ASIC.
All six drives are connected to a FreeBSD server that mounts them via either NFS or SMBFS. The FreeBSD server has an Intel gigabit card and is connected to a different Cisco 3550-12T; the two 3550-12Ts are linked via gigabit fiber.
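For reference, the mounts look roughly like this (hostnames and paths here are examples, not my exact configuration):

# NFS mount of one drive
mount_nfs netdisk1:/shares/backup /mnt/netdisk1

# SMBFS mount of another (credentials via prompt or nsmb.conf)
mount_smbfs //backup@netdisk2/backup /mnt/netdisk2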
I am using rsync to manage my backups across those NFS/SMBFS mounts. I am not using compression with rsync, although it doesn't seem to matter whether it is on or off in terms of achieved throughput. That makes sense: since rsync is doing a local-to-local copy across the mounts, the data crosses the wire as ordinary NFS/SMB traffic, which rsync's -z option cannot compress.
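A typical job looks like this (paths illustrative):

# Local-to-local sync across the mounts; -z would only compress
# rsync's own data stream, not the underlying NFS/SMB traffic
rsync -a --progress /mnt/netdisk1/backup/ /mnt/netdisk2/backup/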
On the MBWE II units, I can also SSH into the devices directly and copy between the two with rsync.
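That direct copy is along these lines (hostnames and share paths are illustrative):

# Run from one MBWE II, pushing straight to the other over SSH
rsync -a -e ssh /shares/backup/ root@netdisk2:/shares/backup/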
Whether copying drive to drive directly or via the NFS/SMBFS mounts, I cannot get more than 60 Mbps out of the drives.
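To take rsync out of the equation entirely, a raw sequential write over one of the mounts is an easy cross-check (path and size illustrative):

# 1 GB sequential write to the NFS mount, no rsync involved
dd if=/dev/zero of=/mnt/netdisk1/testfile bs=1m count=1024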
The smallest MTU on the network is 1546. I have not changed the MTU on the drive interface.
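A quick way to confirm what actually passes end to end is a don't-fragment ping from the FreeBSD box; 1472 bytes of ICMP payload plus 28 bytes of IP/ICMP headers makes a standard 1500-byte packet:

# -D sets the don't-fragment bit; success confirms the path
# carries at least a standard 1500-byte MTU without fragmentation
ping -D -s 1472 netdisk1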
All switch ports were left with autonegotiation on, and every drive port negotiated gigabit full duplex:
Port    Name       Status     Vlan  Duplex  Speed   Type
Gi0/1   NETDISK1   connected  254   a-full  a-1000  10/100/1000BaseTX
Gi0/2   NETDISK2   connected  254   a-full  a-1000  10/100/1000BaseTX
Gi0/3   NETDISK3   connected  254   a-full  a-1000  10/100/1000BaseTX
Gi0/4   NETDISK4   connected  254   a-full  a-1000  10/100/1000BaseTX
Gi0/5   NETDISK5   connected  254   a-full  a-1000  10/100/1000BaseTX
There are no port errors either:
Port    Align-Err  FCS-Err  Xmit-Err  Rcv-Err  UnderSize
Gi0/1   0          0        0         0        0
Gi0/2   0          0        0         0        0
Gi0/3   0          0        0         0        0
Gi0/4   0          0        0         0        0
Gi0/5   0          0        0         0        0
Here is a network diagram: http://kadux.com/~chris/phillipslan.png
Here’s a port graph: http://kadux.com/~chris/graph.png
I am 99.999% sure that the network is not the issue, for two reasons. First, every drive exhibits exactly the same behavior. Second, I'm a network engineer by trade and have been doing this for nearly 20 years.
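That said, for anyone who wants to rule the network out independently, an iperf run between the FreeBSD server and a host on the drive VLAN would settle it (the address below is just an example on VLAN 254):

# On the FreeBSD server
iperf -s

# On a host on the drive VLAN (or a drive, if iperf can be installed on it)
iperf -c 10.0.254.10 -t 30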
If anyone knows how to get better performance out of these drives, I would love to hear it.
Thanks in advance.