Apple requires bitcode submissions for tvOS and watchOS, which has effectively meant that you need Apple’s toolchain to build and distribute binaries for those platforms.
Bitcode is an intermediate format produced by LLVM, which allows optimization passes to be applied to it regardless of the source language. For example, C, C++, Swift, Objective-C, and Rust can all benefit from some of the same bitcode optimization passes.
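For concreteness, here's a minimal sketch of how a Rust compilation ends up in LLVM's intermediate form (the file and function names are just placeholders; the relevant piece is rustc's `--emit=llvm-bc` flag):

```rust
// lib.rs -- an arbitrary function; rustc lowers it to LLVM IR/bitcode the
// same way other frontends (clang for C/C++/ObjC, swiftc for Swift) do.
//
// Producing the bitcode file directly:
//   rustc --emit=llvm-bc --crate-type=lib lib.rs   # writes lib.bc
//
// The same LLVM optimization passes can then run on lib.bc, regardless of
// which source language produced it.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}
```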
In theory, Apple wanted this so they could improve the performance of applications distributed on some of their platforms, though the benefits and potential of this have been a bit dubious. It did mean that Rust (until this) couldn’t easily target those Apple products.
I was under the impression that it wasn't for performance but for distribution, i.e., to optimize for app size. Instead of having a fat binary cross-compiled for different architectures, Apple uses the bitcode to compile and distribute per architecture. It's part of the "app thinning" process.
To expand on this, it also lets them target new CPU architectures without having developers recompile their code for each architecture. This lets them make incremental improvements (e.g., armv7), but would also let them use their own CPU architecture should they move away from Intel/ARM.
As an example, here’s Chris Lattner tweeting about that use case:
I think you’re right that that is the primary reason today. In their marketing material I think they always use the term “reoptimize”; of course, that can mean for size.
App Thinning works just as well without bitcode. It’s just a process of removing unneeded architecture slices and resource variants from a downloadable app.
This is highly dependent on not much changing. For example, even widening integers from 32 to 64 bits would break a lot of software.
So to a degree this might be true. But my guess is there is very little benefit here. It’s not clear to me if this could even optimize for the presence of a new vector unit, for example.
> This is highly dependent on not much changing. For example, even widening integers from 32 to 64 bits would break a lot of software.
As Apple designs their own silicon and calling convention, they were able to pull this off for the S3 → S4 transition (the latter of which is AArch64 running with ILP32).
You would need at least a bridging layer (of Objective-C) so that your app code could use the system frameworks and respond to events (user input, network, sensors, etc.).
Objective-C shares calling conventions and struct layout with C, so theoretically you could directly call the system frameworks from Rust via extern "C" functions.
Unfortunately, you have to use C or ObjC (or at least the ObjC runtime) as the middleman.
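To make that concrete, here's a rough sketch (my own, not from any project mentioned in this thread) of what going through the Objective-C runtime from Rust looks like: you link against libobjc and the relevant framework, declare the runtime entry points as extern "C", and cast objc_msgSend to the concrete signature of each message. The NSProcessInfo call is just an example; in practice you'd likely reach for a wrapper crate such as `objc` rather than hand-rolling this.

```rust
use std::ffi::CString;
use std::os::raw::{c_char, c_void};

// Opaque handles for Objective-C objects and selectors.
#[repr(C)]
pub struct ObjcObject {
    _private: [u8; 0],
}
pub type Id = *mut ObjcObject;
pub type Sel = *mut c_void;

// Entry points from the Objective-C runtime (libobjc).
#[link(name = "objc")]
extern "C" {
    // Returns a Class, but a Class is itself an object, so Id is fine here.
    fn objc_getClass(name: *const c_char) -> Id;
    fn sel_registerName(name: *const c_char) -> Sel;
    // objc_msgSend has no fixed signature; it must be cast to the concrete
    // function type of each message before being called.
    fn objc_msgSend();
}

// Link Foundation so NSProcessInfo is actually available at runtime.
#[link(name = "Foundation", kind = "framework")]
extern "C" {}

fn main() {
    unsafe {
        let class_name = CString::new("NSProcessInfo").unwrap();
        let selector_name = CString::new("processInfo").unwrap();

        let cls = objc_getClass(class_name.as_ptr());
        let sel = sel_registerName(selector_name.as_ptr());

        // Cast objc_msgSend to the shape of `[NSProcessInfo processInfo]`,
        // i.e. an (id, SEL) -> id call.
        let msg_send: unsafe extern "C" fn(Id, Sel) -> Id =
            std::mem::transmute(objc_msgSend as unsafe extern "C" fn());

        let process_info = msg_send(cls, sel);
        assert!(!process_info.is_null());
        // Further interaction with the frameworks is more of the same:
        // register a selector, cast objc_msgSend, send the message.
    }
}
```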
Now that there is a stable Swift ABI, it's at least theoretically possible to add first-class Swift interoperability support to Rust, but it's quite a chunk of work that nobody has attempted yet.
Currently the practical approach is to write the UI in Swift and the back-end in Rust; e.g., the gif.ski app does it this way: https://github.com/sindresorhus/Gifski
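The usual shape of that split (a sketch under my own naming, not Gifski's actual code) is a Rust back-end exposed over the plain C ABI, which the Swift UI then calls through a bridging header:

```rust
// Rust "back-end" side: a C-ABI entry point the Swift UI can call.
// The matching C declaration goes in the bridging header, e.g.:
//   int32_t backend_process(const uint8_t *data, size_t len);
// and Swift calls it as `backend_process(ptr, count)`.
use std::slice;

#[no_mangle]
pub extern "C" fn backend_process(data: *const u8, len: usize) -> i32 {
    if data.is_null() {
        return -1; // signal bad input to the caller
    }
    // SAFETY: the caller promises `data` points at `len` readable bytes.
    let bytes = unsafe { slice::from_raw_parts(data, len) };

    // Placeholder "work": a real app would do its heavy lifting here
    // (encoding, processing, etc.) and hand the result back to Swift.
    bytes.iter().map(|&b| b as i32).sum::<i32>() % 1000
}
```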
Does Apple still require that you submit Objective-C source code with your app? I recall Google creating a transpiler from Java to Objective-C some years ago so they could use their existing Java codebases and still meet Apple's requirement for submitting Objective-C source code.