Just worth noting:
I have had many arguments with other sysadmins and bosses over the stability and validity of sshfs as a shortcut to bridging two machines for file transfers, whether for backups or ad hoc moves.
The following result of an rsync from one sshfs mount to another (on ESXi servers I only had ssh access to), over a slow link, surpassed my expectations by far:
sent 1342341123955 bytes received 91 bytes 3495447.89 bytes/sec
total size is 1342177283599 speedup is 1.00
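(Sanity check on the math: 1,342,341,123,955 bytes at 3,495,447.89 bytes/sec is roughly 384,000 seconds, which is the ~4.4 days mentioned below, at about 3.3 MiB/s sustained.)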
Yes, a 1.3TB VM image taken with GhettoVCB moved over the course of about 4.4 days with no loss or corruption.
Amazing…
Just worth saying: in Linux, where there is a will there is a way. On Windows the same move kept dying and never recovering; we tried a simple file copy using the VMware datastore browser download, Veeam (which gave up the move for some weird licensing reason), robocopy, you name it.
Solution:
Mount both servers onto a VM over ssh using sshfs and rsync across (a sketch follows the list below). The process was slow for the following reasons:
- The lack of compression.
- Encryption on both tunnels (ssh).
- VMware expanding the image as it is pulled from VMFS (as it does).
- Network bandwidth is low and shared with literally hundreds of machines.
- Other reasons like disk latency (no cache on the controller), etc.
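For reference, a minimal sketch of the setup. The hostnames, datastore names, and paths here are made up for illustration, and the reconnect/keepalive options are simply what I would reach for to keep a days-long transfer alive over a flaky link:

# on an intermediary Linux VM that can reach both ESXi hosts
mkdir -p /mnt/src /mnt/dst

# mount each ESXi datastore over ssh; reconnect plus ssh keepalives help
# the mount survive hiccups on a multi-day transfer (adding
# -o compression=yes would address the "lack of compression" point above,
# at some CPU cost on the hosts)
sshfs root@esxi-src:/vmfs/volumes/datastore1 /mnt/src \
  -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3
sshfs root@esxi-dst:/vmfs/volumes/datastore1 /mnt/dst \
  -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3

# rsync sees two local paths, so the data simply streams through the VM;
# --partial keeps a partially transferred file instead of deleting it on
# interruption, and --progress shows the transfer rate
rsync -av --partial --progress /mnt/src/backups/bigvm/ /mnt/dst/backups/bigvm/

# unmount when done
fusermount -u /mnt/src
fusermount -u /mnt/dst

If you want belt-and-braces verification afterwards, something like md5sum run against the same file via both mounts will confirm the copies match, though it re-reads everything over the link, so expect it to take a while.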
I am just saying that sometimes even relatively inelegant solutions can give surprising results in Linux environments.
An ode to stability, and to the good fortune of having such amazing free tools to work with.
🙂