The S3 API allows requests to read a byte range of the file (sorry, object). So you could open multiple connections, each reading a different byte range, and then write those ranges to the target local file with a random-access pattern.
You can spawn multiple connections to S3 to retrieve chunks of a file in parallel, but each connection is capped at 80MB/s, and the aggregate across all of these connections, for a single file downloaded to a single EC2 instance, is capped at 1.6GB/s.
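To make this concrete, here is a minimal sketch of the ranged-GET approach using boto3. The bucket name, key, destination path, chunk size, and worker count are all placeholders; pre-sizing the file lets each worker `pwrite` its range at an absolute offset, which is the random-access write pattern described above (so this sketch is Unix-only).

```python
import os
from concurrent.futures import ThreadPoolExecutor

import boto3
from botocore.config import Config

# Hypothetical values for illustration; tune chunk size and
# worker count against the per-connection and per-instance caps.
BUCKET = "my-bucket"
KEY = "large-object.bin"
DEST = "/tmp/large-object.bin"
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MiB per ranged GET
MAX_CONNECTIONS = 16

# boto3 clients are thread-safe; raise the connection pool to match the workers.
s3 = boto3.client("s3", config=Config(max_pool_connections=MAX_CONNECTIONS))


def fetch_range(fd: int, offset: int, length: int) -> None:
    """GET one byte range and write it into the file at its offset."""
    end = offset + length - 1  # the HTTP Range header is inclusive on both ends
    resp = s3.get_object(Bucket=BUCKET, Key=KEY, Range=f"bytes={offset}-{end}")
    # os.pwrite writes at an absolute offset, so workers can share one fd
    # without coordinating a seek position.
    os.pwrite(fd, resp["Body"].read(), offset)


def parallel_download() -> None:
    size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]
    fd = os.open(DEST, os.O_CREAT | os.O_WRONLY, 0o644)
    try:
        os.ftruncate(fd, size)  # pre-size the file so any range can land anywhere
        with ThreadPoolExecutor(max_workers=MAX_CONNECTIONS) as pool:
            futures = [
                pool.submit(fetch_range, fd, off, min(CHUNK_SIZE, size - off))
                for off in range(0, size, CHUNK_SIZE)
            ]
            for f in futures:
                f.result()  # propagate any download errors
    finally:
        os.close(fd)


if __name__ == "__main__":
    parallel_download()
```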