
I'm curious how they're going to handle the GPU side, since there really isn't a unified GPU architecture yet (by 2015, who knows, but still). Will they ship vendor-specific translations, or try to rally around something like OpenCL? I'd love it if GPU coding became more accessible and vendor-agnostic, so I'm really curious about this.
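For a concrete point of reference, AMD's Aparapi library already takes the OpenCL route today: it translates the bytecode of a Kernel subclass's run() method into an OpenCL kernel at runtime, and falls back to a Java thread pool if no usable GPU is found. A rough sketch (API names as I remember them from Aparapi, so treat the details as approximate):

    import com.amd.aparapi.Kernel;
    import com.amd.aparapi.Range;

    public class VectorAdd {
        public static void main(String[] args) {
            final int n = 1_000_000;
            final float[] a = new float[n];
            final float[] b = new float[n];
            final float[] sum = new float[n];
            for (int i = 0; i < n; i++) { a[i] = i; b[i] = n - i; }

            // run() is translated from JVM bytecode to OpenCL and
            // dispatched to whatever device the OpenCL driver exposes.
            Kernel kernel = new Kernel() {
                @Override
                public void run() {
                    int i = getGlobalId(); // one work-item per array index
                    sum[i] = a[i] + b[i];
                }
            };
            kernel.execute(Range.create(n));
            kernel.dispose();
            System.out.println(sum[0] + " ... " + sum[n - 1]);
        }
    }

The appeal of that model is exactly the vendor-agnosticism: the same bytecode runs on any OpenCL implementation, or on the CPU when translation fails.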

I know the article didn't mention it, but I also wonder what the memory bloat will look like if they turn all primitives into objects, adding per-object overhead just to track the int i in a for loop.
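You can already see that cost with today's autoboxing. A minimal sketch of the worst case, assuming a typical 64-bit JVM where each boxed value carries roughly 16 bytes of header and padding on top of the payload:

    public class BoxingCost {
        public static void main(String[] args) {
            long sum = 0;
            // Primitive counter: lives in a register or on the stack,
            // no heap allocation at all.
            for (int i = 0; i < 10_000_000; i++) {
                sum += i;
            }

            Long boxedSum = 0L;
            // Boxed counter: outside the small Integer cache (-128..127),
            // every i++ allocates a fresh Integer on the heap, and the
            // += unboxes and reboxes boxedSum on each iteration.
            for (Integer i = 0; i < 10_000_000; i++) {
                boxedSum += i;
            }
            System.out.println(sum + " " + boxedSum);
        }
    }

If primitives-as-objects lands without something like value types to flatten them back out, every loop counter turns into the second version.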


