
For the disassembler we round-trip: x86 -> (via model) IR -> (via clang) x86. If the two x86 outputs are identical, then the IR is correct. The IR could be correct even if the outputs are not identical, but then you need to check.
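A minimal sketch of that round-trip check in Python, assuming a hypothetical model_disassemble() that lifts x86 assembly to LLVM IR (everything else is stock clang):

    import subprocess
    import tempfile

    def model_disassemble(asm_text: str) -> str:
        """Hypothetical call into the LLM that lifts x86 assembly to LLVM IR."""
        raise NotImplementedError

    def round_trip_ok(asm_path: str) -> bool:
        original = open(asm_path).read()
        # Lift the original x86 to IR with the model.
        ir = model_disassemble(original)
        with tempfile.NamedTemporaryFile("w", suffix=".ll") as f:
            f.write(ir)
            f.flush()
            # Lower the model-generated IR back to x86 with clang.
            lowered = subprocess.run(
                ["clang", "-S", "-o", "-", f.name],
                capture_output=True, text=True, check=True,
            ).stdout
        # Identical round-tripped assembly proves the IR is correct;
        # a mismatch only means the IR needs checking by other means.
        return lowered == original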

For the auto-tuning, the model suggests the best passes to use in LLVM. We take some effort to weed out bad passes, but LLVM has bugs; that is a problem common to any auto-tuner.
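A sketch of how a suggested pass list could be scored and weeded, assuming LLVM's opt and llc are on PATH; the pass list itself is illustrative, not a model output:

    import os
    import subprocess

    def code_size_after(ir_path: str, passes: str) -> int:
        """Run a pass pipeline with opt, lower with llc, and return the
        object-file size, which is the metric being tuned."""
        subprocess.run(["opt", f"-passes={passes}", ir_path, "-o", "tuned.bc"],
                       check=True)
        subprocess.run(["llc", "-filetype=obj", "tuned.bc", "-o", "tuned.o"],
                       check=True)
        return os.path.getsize("tuned.o")

    # Keep the model's suggestion only if it beats the -Oz baseline and the
    # pipeline actually runs -- check=True throws if opt or llc crashes,
    # which is one cheap way to weed out bad (buggy) pass combinations.
    baseline  = code_size_after("input.ll", "default<Oz>")
    suggested = code_size_after("input.ll", "mem2reg,instcombine,simplifycfg")
    if suggested < baseline:
        print("model's pass list wins by", baseline - suggested, "bytes")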

We also train it to emulate the compiler, even though the compiler already does that job better. We do it because it helps the LLM understand the compiler, and it auto-tunes better as a result.
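For intuition, here is one way such an emulation training pair could be built, with opt (the real compiler) supplying the ground truth; the prompt layout is an assumption, not the paper's exact format:

    import subprocess

    def emulation_pair(ir_path: str, passes: str) -> dict:
        """Build one training pair: (input IR + pass list) -> optimized IR,
        with opt providing the ground truth the model learns to emulate."""
        optimized = subprocess.run(
            ["opt", "-S", f"-passes={passes}", ir_path],
            capture_output=True, text=True, check=True,
        ).stdout
        return {"prompt": f"passes: {passes}\n" + open(ir_path).read(),
                "completion": optimized}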

We hope people will use this model to fine-tune for other heuristics, e.g. an inliner that accepts the IR of the caller and callee and decides profitability. We think things like that will be vastly cheaper for people if they can start from LLM Compiler. Training LLMs from scratch is expensive :-)
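A sketch of what one such fine-tuning example might look like for the inlining case; the field names and prompt layout are hypothetical:

    # IR of the call site: the calling function and the candidate callee.
    caller_ir = """define i32 @caller() {
      %r = call i32 @callee(i32 1)
      ret i32 %r
    }"""
    callee_ir = """define i32 @callee(i32 %x) {
      %y = add i32 %x, 1
      ret i32 %y
    }"""

    # One hypothetical supervised example: the model reads both functions
    # and emits a profitability decision instead of generating code.
    example = {
        "prompt": "caller:\n" + caller_ir + "\ncallee:\n" + callee_ir + "\ndecision:",
        "completion": " inline",  # ground-truth label, e.g. from measured outcomes
    }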

IMO, right now, AI should be used to decide profitability, not correctness.



Have you guys applied this work internally to optimize Meta's codebase?


> The IR could be correct even if the outputs are not identical

Can you be 100% sure that the model-generated IR is correct if the output x86 is identical to the input?



