Hacker News

I've been messing with S3 for a new project involving the HTML5 canvas, so there are lots of CORS and canvas security concerns, objects being PUT from the browser, and a need for low-latency changes.

S3 has not been delivering. Here are a few reasons:

* S3 only provides read-after-write consistency in regions other than US Standard: http://aws.amazon.com/s3/faqs/#What_data_consistency_model_d... Since moving to US-West-1, we've had noticeably more latency. Working without read-after-write just isn't an option: users get stale data for the first few seconds after a write.
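For what it's worth, the only workaround for an eventually consistent store is to poll until the fresh object shows up. A rough sketch of that idea (the `fetch_object` callable, the ETag comparison, and the timings are all illustrative, not an S3 API):

```python
import time

def read_after_write(fetch_object, expected_etag, retries=5, delay=0.5):
    """Poll an eventually consistent store until the fresh object appears.

    fetch_object is any callable returning (etag, body); expected_etag is
    the ETag returned by the PUT. Both names are hypothetical stand-ins.
    """
    for attempt in range(retries):
        etag, body = fetch_object()
        if etag == expected_etag:
            return body
        time.sleep(delay * (2 ** attempt))  # back off between re-reads
    raise TimeoutError("object never became consistent")
```

Of course this burns the first few seconds in retries, which is exactly the latency cost being complained about above.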

* CORS support is basically broken: S3 doesn't return the headers browsers need to decide how objects should be cached: https://forums.aws.amazon.com/thread.jspa?threadID=112772

* Oh, and the console's editor for the CORS config introduces newlines around the AllowedOrigin values that BREAK the configuration. So you need to manually delete them every time you make a change. Don't forget!
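Until that's fixed, one defensive option is to sanitize the XML yourself before pasting it back into the console. A minimal sketch, assuming the problem is stray whitespace injected inside element text (the element names are S3's real CORS config elements; the helper itself is hypothetical):

```python
import xml.etree.ElementTree as ET

def strip_cors_whitespace(cors_xml: str) -> str:
    """Strip stray newlines/indentation from element text and tails,
    the kind the console editor injects around AllowedOrigin values."""
    root = ET.fromstring(cors_xml)
    for elem in root.iter():
        if elem.text:
            elem.text = elem.text.strip()
        if elem.tail:
            elem.tail = elem.tail.strip()
    return ET.tostring(root, encoding="unicode")

# A config broken the way the console breaks it:
broken = """<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>
      https://example.com
    </AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
  </CORSRule>
</CORSConfiguration>"""
```

Running `strip_cors_whitespace(broken)` collapses the config to a single line with `<AllowedOrigin>https://example.com</AllowedOrigin>` intact.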

* 304 responses strip out the cache headers: https://forums.aws.amazon.com/thread.jspa?threadID=104930... This doesn't technically break the spec, but it's quite non-standard.

* I swear I get 403s and other errors at a higher rate than I have from any custom store in the past, but that's purely subjective.

Based on all this, I have to agree with saurik that the S3 team isn't taking its role as an HTTP API seriously enough. They built an API on HTTP, but not one that browsers can successfully work with. Things are broken in very tricky ways, and I'd caution anybody using S3 on the front end of their application to consider the alternatives.

I'm moving some things to Google Cloud Storage right now: it's blazing fast, supports CORS properly, and has read-after-write consistency across the whole service. Rackspace is going to get back to me, but I expect they can do the same (and they have real support).



Regarding this bug:

> CORS support is basically broken. S3 doesn't return the proper headers for browsers to understand how objects should be cached:

The S3 team is working to address this. We're investigating the other issues and always appreciate your feedback.


While you are fixing things, can you please make CloudFront send HTTP/1.1 instead of HTTP/1.0 for 206 (Partial Content) responses to range GET requests? The response is invalid because 206 is not part of HTTP/1.0, so Chrome refuses to cache it, which makes CloudFront terrible for delivering HTML5 media.
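The rule behind Chrome's behavior is simple: 206 was introduced in HTTP/1.1, so a 206 arriving on an HTTP/1.0 status line is malformed and gets treated as uncacheable. A toy predicate capturing that rule (this is an illustration, not Chrome's actual logic):

```python
def is_cacheable_partial(http_version: str, status: int) -> bool:
    """Return whether a response is cache-eligible under the rule above.

    206 Partial Content only exists in HTTP/1.1, so a 206 served over
    HTTP/1.0 is malformed and should not be cached.
    """
    if status != 206:
        return True  # not a range response; normal caching rules apply
    return http_version == "HTTP/1.1"
```

By this rule, CloudFront's HTTP/1.0 206 responses are exactly the case that falls through as uncacheable.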


Thanks, Jeff!



