> If the input buffer was guaranteed to be large enough in this particular case
It was: since it used the same size both to allocate the buffer and to call read(2), it would never put more data than expected in the buffer.
> otherwise you can still hit whatever follows your input buffer.
Yes, but that's not the issue in heartbleed. My comment was about heartbleed, not about covering all the ways in which you can fuck up memory access in C.
> I can imagine an implementation that does not allocate the buffer for the 64 kiB worst case but just large enough to contain the actual request.
"just large enough" is impossible, you'll always over-allocate by at least 1 byte, and then to get the actual best precision you have to read the input data a byte at a time, performing a read(2) per byte. That's both slow and less readable.
"just large enough" is impossible, you'll always over-allocate by at least 1 byte, and then to get the actual best precision you have to read the input data a byte at a time, performing a read(2) per byte. That's both slow and less readable.
You can do this: first allocate for and read the fixed-length part, then determine the length of the variable-length part, and finally allocate for and read the variable-length part. This may of course still return less data than expected and leave you with uninitialized memory. And you may of course receive more data than you asked for.
> Yes, but that's not the issue in heartbleed. My comment was about heartbleed [...]
Of course. I just wanted to say that zeroing memory may not be sufficient in the general case without bounds checking, because people sometimes get the impression that it would be a good and easy fix.
> then determine the length of variable length part
That's the part you can't do: you're reading data from a socket, so you can't skip around with fseek(3); you read(2) or recv(2), and if you don't store the data somewhere, you lose it.
Allocate a fixed-length buffer for the header and read the header, inspect the length field, allocate a second buffer with this length, and finally read the variable-length part into this second buffer. Maybe check that there is no trailing data. This is what you would probably do anyway if the variable-length part could be way larger than 64 kiB and just blindly allocating for the maximum length is not an option.
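For what it's worth, a minimal sketch of that two-phase read in C. The 3-byte header with a 2-byte big-endian length field is a hypothetical wire format for illustration, not the actual TLS heartbeat layout, and `read_exact` is a helper I'm assuming here to loop over short reads:

```c
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

/* Read exactly n bytes, looping over short reads; returns 0 on success. */
static int read_exact(int fd, void *buf, size_t n) {
    unsigned char *p = buf;
    while (n > 0) {
        ssize_t r = read(fd, p, n);
        if (r < 0) {
            if (errno == EINTR)
                continue;       /* interrupted, retry */
            return -1;          /* real error */
        }
        if (r == 0)
            return -1;          /* EOF before we got everything */
        p += r;
        n -= (size_t)r;
    }
    return 0;
}

/* Hypothetical wire format: 1 type byte + 2-byte big-endian payload length,
 * followed by the payload itself. Caller frees the returned buffer. */
static unsigned char *read_record(int fd, size_t *out_len) {
    unsigned char hdr[3];
    if (read_exact(fd, hdr, sizeof hdr) != 0)
        return NULL;
    size_t len = ((size_t)hdr[1] << 8) | hdr[2];
    unsigned char *payload = malloc(len ? len : 1);
    if (payload == NULL)
        return NULL;
    if (read_exact(fd, payload, len) != 0) {
        free(payload);
        return NULL;
    }
    *out_len = len;
    return payload;   /* fully initialized: every byte came from the fd */
}
```

The point is that the second allocation is sized from data you've already read, and `read_exact` refuses to return a partially filled buffer, so nothing uninitialized can leak back out.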
Right. Now you've got significantly more code (and even more chances of getting it wrong), more allocations, and you can still fail to correctly handle read(2) not completely filling a buffer and leaking data. Although most likely less than in heartbleed.
How did we end up here? My initial point was that zeroing memory does not prevent leaks in the general case and we both agree on that. I never claimed OpenSSL should or should not have done something differently.