Tar can transfer more file types and attributes than scp can (even with the -p option). `scp -p` only preserves mode, mtime, and atime; you lose ownership, extended attributes, symlinks, and hardlinks.
You will also get better compression with tar (or rsync), since it compresses the file data directly rather than just the ssh stream (scp's -C is simply passed through to ssh).
In particular, scp is mind-blowingly slow on lots of small files. I independently rediscovered the tar-pipe trick while sitting there watching scp laboriously copy thousands of 100-byte files so slowly I could count them as they went by. That should not be possible, even at modem speeds. scp is fine for moving one file, OK for directories of very large files, but not suitable for general use where you might encounter a significant number of small files.
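For anyone who hasn't seen it, the tar-pipe trick looks something like the commented command below (user@host and paths are placeholders, not a specific setup); the same single-stream, compressed copy can be demonstrated locally without ssh:

```shell
# Over the network (placeholders, adjust to taste):
#   tar -C /src/dir -czf - . | ssh user@host 'tar -C /dst/dir -xzf -'
# Everything travels as one stream, so per-file round trips disappear
# and gzip compresses the file data itself, not just the ssh transport.

# Local demonstration of the same pipe with lots of small files:
src=$(mktemp -d); dst=$(mktemp -d)
for i in $(seq 1 200); do
  printf 'x%.0s' $(seq 1 100) > "$src/file$i"   # 200 files of 100 bytes each
done
tar -C "$src" -czf - . | tar -C "$dst" -xzf -
diff -r "$src" "$dst"                            # exits 0: trees are identical
```

The point is that tar batches everything into one stream up front, so the per-file protocol chatter that makes scp crawl on small files simply never happens.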
Absolutely. Connection latency hits you the hardest, since each file is sent serially and requires 2 (or 3 with -p) round trips in the protocol, and this is on top of an ssh tunnel with its own overhead. I can't remember what my tests showed, but I have a nagging feeling that tar over ssh was far faster than rsync for an initial load, since no round trips are required, though you lose some of rsync's benefits, like resumability and checksums.
If my first tar attempt fails for some reason but has made a lot of progress, I switch to rsync. Best of both worlds. This hasn't come up often enough for me to script it.
Add pv (available in many standard repos these days, from http://www.ivarch.com/programs/pv.shtml if not) into the mix and you get a handy progress bar too.
Put in the middle of a pipe like that, though, pv will only show a throbber, as it can't query the pipe for a length.
If you simplify the first example to:
pv file | nc ...
you get a progress bar on the sending end without manually specifying a size.
Even without a proper percentage progress bar, the display is useful: you can at least see the total sent so far (so if you know the approximate final size, you can judge completeness in your head) and the current rate (so you get some indication of a problem, such as an unexpectedly slow network connection, beyond it simply taking too long).
Also, I find that if I'm going to copy the data once, I'm often going to copy it twice, or will wish to get a more up-to-date version of it later. Rsync clearly wins in these cases.
Finally, from the documentation for rsync's --compress flag:
Note that this option typically achieves better compression ratios than can be achieved by using a compressing remote shell or a compressing transport because it takes advantage of the implicit information in the matching data blocks that are not explicitly sent over the connection.