
If the first worker isn't actually idle, then the scheduler will assign the next incoming request to an idle worker. Am I mistaken? If not, what's the problem here?


With this architecture, load assignment happens only when a connection is first opened. HTTP keep-alive means there's a disconnect between when a connection is opened and when it becomes expensive to serve.

I.e. it's possible for one worker to first serve ten tiny requests (e.g. index.html), then wait while the clients chew on them, then have all ten clients simultaneously request a large asset.
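For concreteness, a minimal sketch of accept-time assignment in a pre-forked worker (listen_fd and serve_connection are hypothetical placeholders, not taken from the article):

    #include <unistd.h>
    #include <sys/socket.h>

    void serve_connection(int conn_fd);   /* hypothetical request handler */

    /* Each pre-forked worker runs this loop on a shared listening socket. */
    void worker_loop(int listen_fd)
    {
        for (;;) {
            /* Load is assigned here, at accept(2) time... */
            int conn_fd = accept(listen_fd, NULL, NULL);
            if (conn_fd < 0)
                continue;

            /* ...but with keep-alive, every later request on this
             * connection hits this same worker, however busy it is. */
            serve_connection(conn_fd);
            close(conn_fd);
        }
    }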


I am not sure it's possible to solve this problem generally. Doing so would require that the kernel be able to predict the future.

Also, IIRC this is a proxy. If you run out of CPU copying data between file descriptors before you run out of bandwidth, I'd be very surprised. I think zero-copy syscalls (splice(2) for socket-to-socket, sendfile(2) for file-to-socket) make it especially cheap.
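For the socket-to-socket case the zero-copy call on Linux is splice(2), since sendfile(2) requires a mmap-able source. A minimal sketch, assuming two already-connected blocking sockets:

    #define _GNU_SOURCE       /* for splice(2) */
    #include <fcntl.h>
    #include <unistd.h>

    /* Forward bytes from src_fd to dst_fd until EOF or error. */
    void forward(int src_fd, int dst_fd)
    {
        int pipefd[2];
        if (pipe(pipefd) < 0)
            return;

        for (;;) {
            /* Move up to 64 KiB from the source socket into the pipe;
             * the data never enters userspace. */
            ssize_t n = splice(src_fd, NULL, pipefd[1], NULL,
                               65536, SPLICE_F_MOVE);
            if (n <= 0)
                break;                  /* EOF or error */

            /* Drain the pipe into the destination socket. */
            while (n > 0) {
                ssize_t m = splice(pipefd[0], NULL, dst_fd, NULL,
                                   n, SPLICE_F_MOVE);
                if (m <= 0)
                    goto done;
                n -= m;
            }
        }
    done:
        close(pipefd[0]);
        close(pipefd[1]);
    }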


Sure it is. Pass the connection socket around between workers as needed, or put a layer in front of the processing layer that hands out work only when a worker is actually ready for it.
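The standard mechanism for handing a connected socket to another process is SCM_RIGHTS ancillary data over a Unix domain socket. Roughly like this sketch (chan_fd is an assumed AF_UNIX channel to the worker):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send conn_fd to the process on the other end of chan_fd. */
    int send_fd(int chan_fd, int conn_fd)
    {
        char dummy = 'F';                   /* must send at least one byte */
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

        union {                             /* aligned ancillary buffer */
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;
        } u;
        memset(&u, 0, sizeof(u));

        struct msghdr msg = { 0 };
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = u.buf;
        msg.msg_controllen = sizeof(u.buf);

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &conn_fd, sizeof(int));

        return sendmsg(chan_fd, &msg, 0) == 1 ? 0 : -1;
    }

The worker does the mirror-image recvmsg(2) and pulls the new fd out of CMSG_DATA. Whether the extra hop beats accept-time assignment is exactly the workload question raised downthread.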


Can you point to a working example of this that actually solves the problem? I.e., one that is demonstrably more efficient for any given request than the FIFO wakeup method?


No, you would need to test it for your workload.

I am saying it is possible, not that it is better. Very different things.


That's why I said it may not be possible to solve this problem generally. That is, there's no general solution, one that is optimal for all workloads.



