
I've visited the Stanford Virtual Reality Lab and tried their $50k HMD and high-precision trackers. I agree with John that latency is currently the #1 problem for VR / AR.

Interestingly, the #2 problem is something you don't realize is a problem until you try it: "vergence-accommodation conflicts". The display presents all imagery at a single fixed focal distance, so the stereo images drive your eyes to converge at one depth while your lenses must stay focused at another. The solution is "Fixed-Viewpoint Volumetric 3D": http://quora.com/Volumetric-3D , which is what I'm creating as my academic career and my startup Vergence Labs.
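To make the conflict concrete, here is a minimal sketch of the mismatch between the two cues. The interpupillary distance and the HMD's fixed focal distance are illustrative assumptions, not measurements from any particular headset:

```python
import math

IPD_M = 0.063          # typical interpupillary distance (assumption)
DISPLAY_FOCUS_M = 2.0  # fixed focal distance of a hypothetical HMD (assumption)

def vergence_angle_deg(distance_m):
    """Angle between the two eyes' lines of sight for a point at distance_m."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

def accommodation_diopters(distance_m):
    """Focus demand in diopters (1/m) for a point at distance_m."""
    return 1.0 / distance_m

# A virtual object rendered 0.5 m away: the stereo images drive the eyes
# to converge for 0.5 m, but the optics keep focus at DISPLAY_FOCUS_M,
# leaving a 1.5-diopter gap between the two cues.
virtual_d = 0.5
conflict_d = abs(accommodation_diopters(virtual_d)
                 - accommodation_diopters(DISPLAY_FOCUS_M))
print(round(vergence_angle_deg(virtual_d), 2))  # vergence demand, degrees
print(round(conflict_d, 2))                     # focus mismatch, diopters
```

A conflict of a fraction of a diopter is tolerable; mismatches like the 1.5 D here are what cause the eyestrain people report.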



Thanks for the info. You will definitely be interested in this part of John's interview, where he discusses depth and how he approached the problem. It starts at around 9:20 and ends at around 11:25:

https://www.youtube.com/watch?v=NYa8kirsUfg#t=9m20s


Oh wait, John does talk about focus! https://www.youtube.com/watch?v=NYa8kirsUfg#t=7m32s

He seems not to have heard of the latest Volumetric 3D display technology, "time-multiplexing": http://bankslab.berkeley.edu/projects/projectlinks/fastswitc... (see the bottommost diagram, labeled "Switchable lens volumetric display.", made practical in 2008; the diagram labeled "Illustration of 3 mirrors display" is a bulkier 2004 technology).
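The idea behind time-multiplexing can be sketched in a few lines: a switchable lens cycles through a small set of focal states fast enough that the eye fuses the depth slices into one volume. The plane count, diopter values, and refresh rate below are illustrative assumptions, not the Banks lab's actual parameters:

```python
# Hedged sketch of time-multiplexed volumetric display scheduling.
# Each full-volume refresh steps the switchable lens through every
# focal state once, showing the matching depth slice at each step.

FOCAL_PLANES_DIOPTERS = [0.25, 1.0, 2.5, 4.0]  # far -> near (assumption)
VOLUME_RATE_HZ = 60                            # full-volume refresh (assumption)

def plane_schedule(n_volumes):
    """Return (time_s, focal_plane_diopters) pairs for n_volumes refreshes."""
    n_planes = len(FOCAL_PLANES_DIOPTERS)
    plane_time = 1.0 / (VOLUME_RATE_HZ * n_planes)
    schedule = []
    for v in range(n_volumes):
        for i, diopters in enumerate(FOCAL_PLANES_DIOPTERS):
            t = (v * n_planes + i) * plane_time
            schedule.append((t, diopters))
    return schedule

sched = plane_schedule(1)  # one full volume refresh: 4 lens states
# Note the lens must switch at VOLUME_RATE_HZ * 4 = 240 Hz in this sketch,
# which is why fast-switching lenses made the approach practical.
```

The key constraint this exposes: per-plane display time shrinks linearly with the number of focal planes, so brightness and lens switching speed bound how many depth slices you can afford.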



