AFAIK Jai only uses LLVM in release mode, since it slows down the build. This makes sense: with optimizations off, you only have to do codegen. And if you're using LLVM for codegen, then you have to codegen twice! Once to LLVM IR and once again to machine code.
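For concreteness, here's roughly what that intermediate step looks like (an illustrative function I made up; exact syntax varies by LLVM version). Even at -O0 the frontend has to produce something like this, and then the LLVM backend still has to do instruction selection, register allocation, etc. to turn it into machine code:

    ; roughly what a frontend might emit for: square(x) = x * x
    define i32 @square(i32 %x) {
    entry:
      %mul = mul nsw i32 %x, %x   ; multiply the argument by itself
      ret i32 %mul                ; return the product
    }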
Even when not using LLVM, Jai still does "codegen twice." Most compilers today (even JITs) have at least one IR in between the syntax tree and the generated code.
The existence of LLVM IR is not the problem; rather, it's how it gets used (on both sides of the API). Generating a lot of naive IR and letting the optimizer clean it up, for example, has a large cost.
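As a rough sketch of what that looks like (illustrative function, names made up, syntax as in recent LLVM versions): a naive frontend often gives every variable a stack slot and emits loads/stores around each use, then relies on a pass like mem2reg to turn it back into clean SSA:

    ; naive, -O0-style IR: a stack slot per variable, loads and stores around every use
    define i32 @add(i32 %a, i32 %b) {
    entry:
      %a.addr = alloca i32, align 4
      %b.addr = alloca i32, align 4
      store i32 %a, ptr %a.addr, align 4
      store i32 %b, ptr %b.addr, align 4
      %0 = load i32, ptr %a.addr, align 4
      %1 = load i32, ptr %b.addr, align 4
      %sum = add nsw i32 %0, %1
      ret i32 %sum
    }

    ; what the optimizer (mem2reg) boils it down to
    define i32 @add(i32 %a, i32 %b) {
    entry:
      %sum = add nsw i32 %a, %b
      ret i32 %sum
    }

All that extra IR has to be built, walked by the optimizer, and then thrown away, which is where a lot of the compile-time cost comes from.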
And while I'm not too up to date on the details of Jai, the last I heard it was still very fast to compile even in release builds that did use LLVM. That suggests the Jai compiler is smarter about how it generates IR.
This is basically just saying that LLVM has an IR. While going straight from the AST to machine code may sound great for compilation speed, it makes compiler maintenance really tough. Many AOT compilers nowadays are converging on four levels of IR, which seems to be a sweet spot. (Swift has AST, SIL, LLVM IR, and MachineInstr; Rust has AST, MIR, LLVM IR, and MachineInstr; GCC has AST, GENERIC, GIMPLE, and RTL.)