the answer I've seen when this has come up before is that, for allocators, there's no practical performance impact -- allocating takes far more time than the virtual dispatch does, so the dispatch cost ends up negligible. for code bloat, I'm not sure what you mean exactly; the allocator interface is implemented via a vtable, and the impact on binary size is pretty minimal. you're also not really creating more than a couple of allocators in an application (typically a general-purpose allocator, and maybe an arena allocator that wraps it in specific scenarios).
for IO, which is new and I have not actually used yet, here are some relevant paragraphs:
The new Io interface is non-generic and uses a vtable for dispatching function calls to a concrete implementation. This has the upside of reducing code bloat, but virtual calls do have a performance penalty at runtime. In release builds the optimizer can de-virtualize function calls but it’s not guaranteed.
...
A side effect of proposal #23367, which is needed for determining upper bound stack size, is guaranteed de-virtualization when there is only one Io implementation being used (also in debug builds!).
He's talking about passing pointers to the allocator and Io objects as parameters throughout the program, not about how the allocator vtables that dispatch its virtual functions are implemented. But context pointers are a requirement in any program. Consider that a context pointer (`this`) is passed on every single method call ... it's no more "code bloat" than having to save and restore registers on every call.
https://kristoff.it/blog/zig-new-async-io/
https://github.com/ziglang/zig/issues/23367