
The S3 API allows requests to read a byte range of the file (sorry, object), so you could have multiple connections each reading a different byte range. The ranges would then need to be written to the target local file using random-access (positional) writes.
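
Roughly like this, as a minimal sketch in Python with boto3 (bucket, key, chunk size, and worker count are all placeholders, not anything from the thread):

    # Parallel ranged GETs reassembled with positional writes.
    # Requires boto3 and AWS credentials; names are hypothetical.
    import os
    from concurrent.futures import ThreadPoolExecutor

    import boto3

    BUCKET, KEY, OUT = "my-bucket", "big-object", "big-object.local"
    CHUNK = 64 * 1024 * 1024  # 64 MiB per ranged GET

    s3 = boto3.client("s3")
    size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]

    # Pre-size the target file so each worker can write at its own offset.
    fd = os.open(OUT, os.O_WRONLY | os.O_CREAT)
    os.truncate(OUT, size)

    def fetch(offset):
        end = min(offset + CHUNK, size) - 1
        body = s3.get_object(
            Bucket=BUCKET, Key=KEY, Range=f"bytes={offset}-{end}"
        )["Body"].read()
        os.pwrite(fd, body, offset)  # random-access write at the chunk's offset

    with ThreadPoolExecutor(max_workers=16) as pool:
        list(pool.map(fetch, range(0, size, CHUNK)))
    os.close(fd)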


I know that already... and it's exactly what I tested and confirmed here: https://news.ycombinator.com/item?id=44249137

You can spawn multiple connections to S3 to retrieve chunks of a file in parallel, but each of these connections is capped at 80 MB/s, and the aggregate of these connections, while operating on a single file from a single EC2 instance, is capped at 1.6 GB/s.
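
For what it's worth, a rough sketch of how one might reproduce that kind of measurement: time the same ranged-GET workload at increasing connection counts and see where aggregate throughput plateaus. Bucket and key are placeholders, and the object is assumed large enough to cover all the ranges:

    # Time N parallel ranged GETs of one object and report aggregate MB/s.
    # Assumes the object is at least n_max * CHUNK bytes.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import boto3

    BUCKET, KEY = "my-bucket", "big-object"
    CHUNK = 256 * 1024 * 1024  # 256 MiB per connection

    s3 = boto3.client("s3")

    def pull(offset):
        rng = f"bytes={offset}-{offset + CHUNK - 1}"
        return len(s3.get_object(Bucket=BUCKET, Key=KEY, Range=rng)["Body"].read())

    for n in (1, 2, 4, 8, 16):
        start = time.monotonic()
        with ThreadPoolExecutor(max_workers=n) as pool:
            total = sum(pool.map(pull, range(0, n * CHUNK, CHUNK)))
        mbps = total / (time.monotonic() - start) / 1e6
        print(f"{n:2d} connections: {mbps:8.1f} MB/s")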



