> On a remote network I run ethtool on both cards and I got both 1000Mb/s
> speed

As the 20-odd MB/s you're getting is above what is possible on 100M ethernet, you can rule out any ethernet interfaces running at 100M. Can you describe the network between the two systems with the slow transfer? If there is a fast WAN from one side of the globe to the other, it could be latency related.

OpenSSH used to have a fixed internal window size that made it slow on high-bandwidth, high-latency links, and I notice the hpn USE flag still exists in the openssh ebuild, which implies the issue with openssh still exists. Rsync can use either ssh or its own protocol, so if there's a high-latency link between the two boxes and rsync is going over ssh, you could investigate rebuilding openssh with +hpn. What does ping show the latency as?

Otherwise I'd be thinking about packet loss. The first place to look for that is the endpoint interfaces:

ifconfig enp35s0f0 | grep err
        RX errors 0  dropped 0  overruns 0  frame 0
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
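
Those are only the kernel's summary counters, though. ethtool -S dumps the driver's own statistics, which often include drop/missed/CRC counters that never make it into the ifconfig totals. A rough filter, assuming the same interface name as above (the exact counter names vary by driver):

ethtool -S enp35s0f0 | grep -iE 'err|drop|miss'

Worth running on both endpoints, and on any switch ports in between if you can get at them.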
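
For the latency question, a plain ping between the two boxes is enough; "remotehost" below is just a placeholder for the other end:

ping -c 20 remotehost

The reason the number matters: a single TCP (or ssh channel) stream can only move one window of data per round trip, so throughput tops out at roughly window size / RTT. If the effective window is somewhere around the 2 MB I believe stock openssh uses these days, a 100 ms RTT would cap a single stream at about 20 MB/s, which is suspiciously close to what you're seeing.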
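
If it does turn out to be a long fat pipe and rsync is riding over ssh, the Gentoo end of the +hpn rebuild is just a USE flag change, something like this from memory (double-check the package.use path on your system):

echo "net-misc/openssh hpn" >> /etc/portage/package.use/openssh
emerge --ask --oneshot net-misc/openssh

then restart sshd, ideally with the rebuild done on both ends. The other way around the ssh window entirely is to run an rsync daemon on the receiving box and point rsync at rsync://host/module/ (host and module being placeholders for your setup) so it uses its own protocol instead of ssh.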