The average case is always at most as expensive as a virtual call. But yes, the JIT most certainly introduces unpredictability, which may be unsuitable for hard realtime applications (hard realtime Java programs employ AOT compilation for those classes that require absolute predictability).
It's a problem for more than just realtime applications. Requiring a JIT means requiring a runtime, and that makes it much harder to do things like embed Rust libraries in scripting languages or expose a C interface.
JIT has little to do with interoperation. It's quite simple to generate C symbols pointing to stubs. What makes interoperation hard is usually a GC rather than a JIT (it's just that most JITted languages also employ a GC; but if you look at, say, Go, it's just as hard to embed or link against, and it doesn't employ a JIT at all -- its runtime "just" performs scheduling and GC).
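For context, here is a minimal sketch of the Rust side of "generate C symbols": a function exported with the C ABI that a C caller (or a scripting language's FFI) can bind to. The name `rust_add` is illustrative, not from any real library.

```rust
// Hypothetical sketch: exporting a C-callable symbol from Rust.
// `#[no_mangle]` keeps the symbol name as-is; `extern "C"` uses the C ABI.
// A C caller would declare it as: int32_t rust_add(int32_t, int32_t);

#[no_mangle]
pub extern "C" fn rust_add(a: i32, b: i32) -> i32 {
    // Wrapping arithmetic so overflow behavior is well-defined across the boundary.
    a.wrapping_add(b)
}

fn main() {
    // Calling it directly here just to demonstrate; normally this would be
    // compiled as a cdylib/staticlib and invoked from C.
    assert_eq!(rust_add(2, 3), 5);
    println!("rust_add(2, 3) = {}", rust_add(2, 3));
}
```

No runtime needs to be initialized before calling such a symbol, which is the point being made: the difficulty with GC'd languages is not producing the symbol, but the runtime that must be alive behind it.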
---
BTW, Java can be embedded in scripting languages because those languages run on the platform itself and share the runtime. Because of the JIT -- which optimizes across libraries and languages -- the interoperation is cheaper than with C. So much so that you get the following story: as part of the work being done at Oracle on Graal, HotSpot's next-gen JIT, they've ported various scripting languages to the new JIT, among them Ruby. They've found[1] that if they interpret/JIT the C code of native Ruby extensions, they get better performance than a "plain" Ruby runtime calling into statically compiled C, because the JIT is able to optimize across the language barrier.