Apple requires bitcode submissions for tvOS and watchOS, which has effectively meant that you needed Apple’s toolchain to build and distribute binaries for those platforms.
Bitcode is an intermediate format produced by LLVM that allows the same optimization passes to be applied across different languages. For example, C, C++, Swift, ObjC, and Rust can all benefit from some of the same bitcode optimization passes.
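As a rough sketch of what that means in practice (the file names are hypothetical; the flags are the standard clang/rustc/LLVM ones), several front ends emit the same bitcode container:

```c
/* add.c — a minimal example. clang and rustc both lower to LLVM bitcode,
 * so the same IR-level passes can run on output from either front end:
 *
 *   clang -c -emit-llvm add.c -o add.bc            # C    -> LLVM bitcode
 *   rustc --crate-type=lib --emit=llvm-bc add.rs   # Rust -> LLVM bitcode
 *   llvm-dis add.bc                                # view it as textual IR
 */
int add(int a, int b) {
    return a + b;
}
```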
In theory, Apple wanted this so they could improve the performance of applications distributed on some of their platforms, though the benefits and potential of this have been a bit dubious. It did mean that, until this change, Rust couldn’t easily target those Apple products.
I was under the impression that it wasn't for performance but for distribution, i.e., to optimize for app size. Instead of shipping a fat binary cross-compiled for different architectures, Apple uses the bitcode to compile and distribute a build per architecture. It's part of the "app thinning" process.
To expand on this, it also lets them target new CPU architectures without having developers recompile code for each one. That lets them make incremental improvements (e.g., armv7), but would also let them use their own CPU architecture should they move away from Intel/ARM.
As an example, here’s Chris Lattner tweeting about that use case:
I think you’re right that that is the primary reason today. In their marketing material I think they always use the term “reoptimize”; of course, that can mean for size.
App Thinning works just as well without bitcode. It’s just a process of removing unneeded architecture slices and resource variants from a downloadable app.
This is highly dependent on not much changing. For example, even widening integers from 32 to 64 bits would break a lot of software.
So to a degree this might be true. But my guess is there is very little benefit here. It’s not clear to me if this could even optimize for the presence of a new vector unit, for example.
> This is highly dependent on not much changing. For example, even widening integers from 32 to 64 bits would break a lot of software.
As Apple designs their own silicon and calling convention, they were able to pull this off for the S3 → S4 transition (the latter of which is AArch64 running with ILP32).
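For anyone unfamiliar with ILP32: int, long, and pointers are all 32-bit even though the CPU executes the 64-bit AArch64 instruction set (Apple’s name for this target is arm64_32). A minimal compile-time sketch of the difference; the exact target triples accepted by Xcode’s clang are my assumption:

```c
/* sizes.c — the same source compiled for two data models, e.g.:
 *   clang --target=arm64_32-apple-watchos -c sizes.c   (ILP32: asserts pass)
 *   clang --target=arm64-apple-macos      -c sizes.c   (LP64: long/pointer asserts fail)
 * No headers needed; the failures make the data-model difference visible.
 */
_Static_assert(sizeof(int) == 4, "int is 32-bit under both models");
_Static_assert(sizeof(long) == 4, "long is 32-bit only under ILP32");
_Static_assert(sizeof(void *) == 4, "pointers are 32-bit only under ILP32");
```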