Posted on 03 September 2008 in linux, How to, geek
I am currently in the process of migrating several dozen sites between two servers. I tried using the scp command to copy the sites over; however, scp is very slow when transferring many small files.
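For context, the scp equivalent would be a recursive copy along these lines (the host and paths are just placeholders matching the example further down); the per-file overhead is what makes it crawl when there are thousands of small files:

scp -r /var/www/html/www.example.com joebloggs@otherserver.com:~/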
I did a little research on how to use tar over an ssh connection and realised that you can tell tar to write its output to stdout.
Using this method sends the compressed tarball to stdout. You then pipe that into an ssh session which is running the extract counterpart of the tar command, along with a change-directory argument. This essentially sends the compressed tarball into a decompression process at the other end, over a secure ssh "pipe".
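In general terms the pattern looks like this (user, host and both paths here are placeholders):

# left of the pipe: create a gzipped tarball and write it to stdout
# right of the pipe: read the tarball from stdin and extract it under /destination/path
tar czf - /source/path | ssh user@host "tar xzf - -C /destination/path"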
The result is a pretty quick file transfer which, because the data is sent in compressed gzip form (or bzip2 if you replace the z with a j in the tar commands; see the bzip2 example below), saves on bandwidth too.
Here is an example of how to do this, assuming you are in (for example) /var/www/html/ and the website you want to transfer is the folder www.example.com.
tar czf - www.example.com/ | ssh joebloggs@otherserver.com "tar xzf - -C ~/"
This will send the entire www.example.com folder over to the home folder on your target server, in compressed form, over an encrypted connection.
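The bzip2 variant mentioned above would look something like this (same placeholder server and folder, just swapping z for j on both ends of the pipe):

tar cjf - www.example.com/ | ssh joebloggs@otherserver.com "tar xjf - -C ~/"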
This is the latest version, based on some performance tweaks to SSH for older boxes like low-power NAS devices. In my case, SSH was maxing out the CPU on the NAS, which was limiting my transfer rate.
ssh -T -c arcfour -o Compression=no -x 192.168.1.2 "tar cf - /remote/path" | tar xf - -C .
The Gist has the explanation, but basically we are interested in the following items:
-T: disable pseudo-terminal allocation, which isn't needed when piping data.
-c arcfour: use the lightweight arcfour cipher, which is much cheaper on the CPU than the default.
-o Compression=no: turn off SSH-level compression to save more CPU (the tar commands also drop the z flag for the same reason).
-x: disable X11 forwarding.
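Applied to the earlier push example, those tweaks would look something like this (joebloggs@otherserver.com and www.example.com are the same placeholders as before, and the arcfour cipher has to be available on both ends):

# no gzip (no z flag) and no SSH compression: raw data, minimal CPU overhead
tar cf - www.example.com/ | ssh -T -c arcfour -o Compression=no -x joebloggs@otherserver.com "tar xf - -C ~/"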
In my case, this meant going from ~0.5Mb/s to around 3Mb/s.
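If you want to see the transfer rate for yourself, one simple option is to drop pv into the receiving end of the pipeline (assuming pv is installed on the machine running the command):

# pv passes the stream through untouched and prints throughput as it goes
ssh -T -c arcfour -o Compression=no -x 192.168.1.2 "tar cf - /remote/path" | pv | tar xf - -C .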