OSL shaders are specialized dynamically at runtime and then JIT'd using LLVM. That's about as far from a static compiler framework like GCC or Clang as you can get, from my perspective.
In addition, fast JIT speed was a very high priority/need and the OSL team spent a few months carefully tuning the LLVM passes to give fast code at a low JIT cost. (All of this was done and discussed publicly; feel free to grep the mailing list.)
Would you mind explaining how OSL is as close to something like Clang or GCC as you can get? I just don't see it.
JIT compilers have to make a tradeoff between time spent on analysis and time spent on execution, and that tradeoff is what makes them hard: they need different levels of optimization depending on how often a piece of code is going to be executed. If the code will only run once or twice, it usually doesn't make sense to translate it out of an efficient interpreter encoding (which LLVM does not have); if it will run a few hundred times, it can get a little more analysis; and if it's going to be the core of a loop, it makes a lot of sense to run analysis that may take multiple milliseconds.
Using a static list of analyses that are always run before execution would be a static compiler approach, not a JIT approach. A JIT will generally profile the code, optimize it when it gets hot, and deoptimize it when assumptions made during optimization no longer hold (e.g. virtual calls in the JVM being optimized to static calls because only one definition of a virtual method exists, but subsequently a new class is loaded that overrides that method). All this dynamic runtime modification of the code (not just initial compilation, but modification of existing, executing code) is where JIT technology is distinct from static compiler technology.
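A minimal sketch of that profile/optimize/deoptimize cycle (the names and the threshold are invented for illustration, not taken from any real VM):

```python
HOT_THRESHOLD = 100  # invented number; real JITs tune this per tier

class Method:
    """A callable with a per-method profile counter and an optimized
    version that can be thrown away (deoptimized) when an assumption
    made during optimization stops holding."""
    def __init__(self, name, body):
        self.name = name
        self.body = body              # stand-in for bytecode
        self.call_count = 0
        self.compiled = None          # stand-in for optimized machine code
        self.assumption_valid = True  # e.g. "only one override exists"

    def invoke(self, *args):
        if self.compiled is not None and not self.assumption_valid:
            # Deoptimize: e.g. a newly loaded class overrode the method,
            # invalidating the devirtualized fast path.
            self.compiled = None
            self.call_count = 0
            self.assumption_valid = True  # re-profile under the new world
        if self.compiled is not None:
            return self.compiled(*args)   # optimized tier
        self.call_count += 1
        if self.call_count >= HOT_THRESHOLD:
            # Code got hot: "compile" it. A real JIT would emit machine
            # code here; we just promote the same callable.
            self.compiled = self.body
        return self.body(*args)           # interpreter tier
```

The point is that compilation decisions are made and unmade while the program runs, not fixed once up front.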
> Would you mind explaining how OSL is as close to something like Clang or GCC as you can get? I just don't see it.
It seems that Open Shading Language is not really Just-in-Time (JIT) compiled. It's compiled from LLVM IR to native machine code in an Ahead-of-Time (AOT) fashion, just at runtime, and then executed. This is actually not very far from a static compiler; the only difference is that the target architecture is known at runtime, and the LLVM IR is compiled to machine code using that information.
The difference between JIT and AOT here is that in a JIT situation, there is some form of interpreter executing bytecode of some kind (probably not LLVM IR). When this interpreter reaches a loop, it will attempt to compile it (from bytecode to LLVM IR, and finally from LLVM IR to native code). The interpreter then calls the compiled code, which runs for as long as possible and finally returns control to the interpreter, which continues interpreting until it finds another opportunity for JIT'ing. A JIT compiler is typically employed when the original source language cannot be compiled statically to machine code, because of e.g. dynamic typing.
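That interpreter/JIT handoff can be sketched in a few lines. This is a toy (everything here is invented for illustration): the "interpreter" executes one loop iteration at a time and counts back-edges, and once the loop is hot it hands the remaining work to a "compiled" version, played here by a native Python closure standing in for machine code:

```python
LOOP_HOT_THRESHOLD = 50  # invented threshold

def run_sum_loop(n):
    """Interpret `total = 0; for i in range(n): total += i` one step at
    a time, handing off to 'compiled' code once the loop gets hot."""
    compiled_rest = None
    total, i = 0, 0
    back_edges = 0
    while i < n:
        if compiled_rest is None and back_edges >= LOOP_HOT_THRESHOLD:
            # The 'JIT' kicks in: compile the rest of the iterations.
            compiled_rest = lambda total, i: total + sum(range(i, n))
        if compiled_rest is not None:
            # Hand off: run the compiled code to completion.
            return compiled_rest(total, i)
        total += i        # interpreter tier: one iteration per step
        i += 1
        back_edges += 1
    return total
```

Short-running loops never pay for compilation; long-running ones spend most of their time in the fast path.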
So from what I can tell, OSL is just a static ahead-of-time compiler, with the final stage of compilation taking place at runtime.
Please tell me if some of my background facts were incorrect (in particular about OSL).
I think this is correct. OSL is using LLVM in an AOT fashion, though a lot of runtime code generation and specialization is being done just prior to running the AOT LLVM JIT.
As opposed to something like, say, Google v8, which is using runtime feedback to make hot code paths fast (and to remove dynamism when it can be shown to be safe).
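As a rough sketch of that kind of runtime feedback (invented names, nothing to do with V8's actual internals), a call site can cache the single receiver type it has seen and skip the dynamic lookup while that assumption holds, falling back to the slow path when a new type shows up:

```python
class CallSite:
    """A toy monomorphic inline cache: remembers the one receiver type
    seen at this call site and devirtualizes the method lookup for it."""
    def __init__(self, method_name):
        self.method_name = method_name
        self.cached_type = None
        self.cached_method = None

    def call(self, receiver):
        t = type(receiver)
        if t is self.cached_type:
            # Fast path: the "only this type flows here" assumption
            # held, so the dynamic lookup is skipped entirely.
            return self.cached_method(receiver)
        # Slow path: do the dynamic lookup, then cache for next time.
        self.cached_type = t
        self.cached_method = getattr(t, self.method_name)
        return self.cached_method(receiver)
```

When the site stays monomorphic, the dynamism has effectively been removed; when it doesn't, correctness is preserved by the slow path.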
I guess I wasn't aware that people weren't including dynamic code generation and AOT compilation in the "JIT" category. To me JIT meant generating machine code "at runtime", and both OSL and v8 would be at opposite ends of the JIT "at runtime" code generation/compilation spectrum -- OSL on the AOT side and v8 on the keep-running-the-compiler side.
In src/liboslexec/llvm_instance.cpp, OSL uses a custom list of 13 LLVM optimizer passes (plus implicit passes pulled in as dependencies) -- fewer than Clang uses at -O2 for C, but still a fair amount. Then it runs the full LLVM "-O2" backend. Even if the compilation happens "dynamically" from the perspective of the application, OSL is using many compiler features more commonly associated with "static" compilation.
JIT speed is relative. Shader programs are often executed many many many times, so OSL can likely afford to make different tradeoffs than many other JIT-using applications.