> I haven't done any graphics programming in years, but I thought you'd want to keep the number of draw calls down, do you need to cluster these triangles into fewer draw calls?
GPUs can draw tens of thousands of vertices per draw call, whether they are connected together into logical objects or are "triangle soup" like this. There is some benefit to having triangles connected together so they can "share" a vertex, but not as much as you might think. Since GPUs are massively parallel, it does not matter much where on the screen or where in the buffer your data is.
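The "sharing" mentioned above usually means indexed geometry: instead of every triangle carrying its own copies of each corner, unique vertices are stored once and triangles reference them by index. A minimal sketch (not Matterport's actual code; real engines like THREE.BufferGeometry expose the same idea via an index buffer):

```typescript
// Triangle soup: a quad as two independent triangles, 6 vertices total,
// with two corners duplicated.
const soup: number[][] = [
  [0, 0], [1, 0], [1, 1],   // triangle 1
  [0, 0], [1, 1], [0, 1],   // triangle 2 repeats two corners of triangle 1
];

// Indexed: store each unique vertex once, and let triangles refer to
// vertices by position in the vertex array.
const vertices: number[][] = [
  [0, 0], [1, 0], [1, 1], [0, 1],
];
const indices: number[] = [0, 1, 2, 0, 2, 3]; // same two triangles

console.log(soup.length);     // 6 vertices uploaded as soup
console.log(vertices.length); // 4 vertices uploaded when indexed
```

The saving grows with connectivity (a closed mesh shares most vertices six ways), but as noted above, on a modern GPU it is a modest win rather than a dramatic one.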
> Is there any work being done to optimize a volumetric representation of scenes and from that create a set of surfaces with realistic looking shaders or similar?
This is basically where the field was going until NeRFs and splats. But NeRFs and splats were such HUGE steps in fidelity that they inspired a ton of new research in that direction, and I think rightfully so! The truth is that reality is really messy, so trying to reconstruct logically separated meshes for everything you see is a very hard way to recreate reality. NeRFs and splats recreate it much more easily.
Matterport | Senior Software Engineers in Frontend/3D, iOS or Android | US Remote | Full-time | REMOTE VISA
Matterport makes a 3D camera and web platform that allows users to easily capture and display 3D models of physical spaces. Check out some example spaces in our gallery: https://matterport.com/gallery
Some interesting facts about us:
* We host over 4,500,000 highly detailed 3D models of real places captured by our cameras, amounting to over 4.5PB of data.
* We serve over 100 million 3D views every month, which amounts to over 50 billion requests and 2.5PB of data.
We're looking for engineers for our fully remote 3D (WebGL) and mobile (iOS/Android) teams.
* The 3D team makes our 3D viewing and editing applications for the web and VR. Core technologies include Typescript, THREE.js, WebGL and Preact. The team sits at the very end of our 3D pipeline: 3D data gets uploaded from our cameras, processed by our vision pipeline in C++, Python, and TensorFlow, before GraphQL APIs serve the data to the WebGL and VR applications, which is what the end user sees.
* The Mobile team makes our 3D scanning applications, which let our users scan 3D models with their phone, or control a 360 or 3D camera. We write in Swift (iOS) and Kotlin (Android), and often get to do a lot of high-performance, graphics-heavy work in addition to the more normal UI work.
If one of these teams piqued your interest, send your resume and some words to hakon (at) matterport (dot) com!
Matterport | Senior Software Engineer, Frontend/3D | US Remote | Full-time | REMOTE VISA
Matterport makes a 3D camera and web platform that allows users to easily capture and display 3D models of physical spaces. Check out some example spaces in our gallery: https://matterport.com/gallery
Some interesting facts about us:
* We host over 4,000,000 highly detailed 3D models of real places captured by our cameras, amounting to over 4PB of data.
* We serve over 90 million 3D views every month, which amounts to over 45 billion requests and 2PB of data.
The WebGL team makes our 3D viewing and editing applications for the web and VR. Core technologies include Typescript, THREE.js, WebGL and Preact. The team sits at the very end of our 3D pipeline: 3D data gets uploaded from our cameras, processed by our vision pipeline in C++, Python, and TensorFlow, before REST APIs serve the data to the WebGL and VR applications, which is what the end user sees.
If you have worked with 3D engines before, or if you already know "normal" web development well and want to take the next step into the world of 3D and WebGL, send me a note at hakon (at) matterport (dot) com.
Matterport | Senior Software Engineer, Frontend/3D | Sunnyvale / San Francisco, CA | Full-time | REMOTE VISA
Matterport makes a 3D camera and web platform that allows users to easily capture and display 3D models of physical spaces. Check out some example spaces in our gallery: https://matterport.com/gallery
Some interesting facts about us:
* We host over 4,000,000 highly detailed 3D models of real places captured by our cameras, amounting to over 4PB of data.
* We serve over 90 million 3D views every month, which amounts to over 45 billion requests and 2PB of data.
The WebGL team makes our 3D viewing and editing applications for the web and VR. Core technologies include Typescript, THREE.js, WebGL and Preact. The team sits at the very end of our 3D pipeline: 3D data gets uploaded from our cameras, processed by our vision pipeline in C++, Python, and TensorFlow, before REST APIs serve the data to the WebGL and VR applications, which is what the end user sees.
If you have worked with 3D engines before, or if you already know "normal" web development well and want to take the next step into the world of 3D and WebGL, send me a note at hakon (at) matterport (dot) com.
Matterport | Senior Software Engineer, Frontend/3D | Sunnyvale / San Francisco, CA | Full-time | REMOTE VISA
Matterport makes a 3D camera and web platform that allows users to easily capture and display 3D models of physical spaces. Check out some example spaces in our gallery: https://matterport.com/gallery
Some interesting facts about us:
* We host over 3,000,000 highly detailed 3D models of real places captured by our cameras, amounting to over 3PB of data.
* We serve over 90 million 3D views every month, which amounts to over 45 billion requests and 2PB of data.
The WebGL team makes our 3D viewing and editing applications for the web and VR. Core technologies include Typescript, THREE.js, WebGL and Preact. The team sits at the very end of our 3D pipeline: 3D data gets uploaded from our cameras, processed by our vision pipeline in C++, Python, and TensorFlow, before REST APIs serve the data to the WebGL and VR applications, which is what the end user sees.
If you have worked with 3D engines before, or if you already know "normal" web development quite well and want to take the next step into the world of 3D and WebGL, send me a note at hakon (at) matterport (dot) com.
Matterport | Senior Software Engineer, Frontend/3D | Sunnyvale / San Francisco, CA | Full-time | REMOTE VISA
Matterport makes a 3D camera and web platform that allows users to easily capture and display 3D models of physical spaces. Check out some example spaces in our gallery: https://matterport.com/gallery
Some interesting facts about us:
* We host over 2,000,000 highly detailed 3D models of real places captured by our cameras, amounting to over 2PB of data.
* We serve over 60 million 3D views every month, which amounts to over 30 billion requests and 1PB of data.
The WebGL team makes our 3D viewing and editing applications for the web and VR. Core technologies include Typescript, THREE.js, WebGL and Preact. The team sits at the very end of our 3D pipeline: 3D data gets uploaded from our cameras, processed by our vision pipeline in C++, Python, and TensorFlow, before REST APIs serve the data to the WebGL and VR applications, which is what the end user sees.
If you already know "normal" web development quite well, and want to take the next step into the world of 3D and WebGL, send us a note!
Matterport | Senior Software Engineer, Frontend/3D | Sunnyvale / San Francisco, CA | Full-time | ONSITE VISA
Matterport makes a 3D camera and web platform that allows users to easily capture and display 3D models of physical spaces. Check out some example spaces in our gallery: https://matterport.com/gallery
Some interesting facts about us:
* We host over 1,800,000 highly detailed 3D models of real places captured by our cameras, amounting to over 2PB of data.
* We serve over 60 million 3D views every month, which amounts to over 30 billion requests and 1PB of data.
The WebGL team makes our 3D viewing and editing applications for the web and VR. Core technologies include Typescript, THREE.js, WebGL and Preact. The team sits at the very end of our 3D pipeline: 3D data gets uploaded from our cameras, processed by our vision pipeline in C++, Python, and TensorFlow, before REST APIs serve the data to the WebGL and VR applications, which is what the end user sees.
If you already know "normal" web development quite well, and want to take the next step into the world of 3D and WebGL, send us a note!
Matterport | Staff Software Engineer, 3D | Sunnyvale / San Francisco, CA | Full-time | ONSITE VISA
Matterport makes a 3D camera and web platform that allows users to easily capture and display 3D models of physical spaces. Check out some example spaces in our gallery: https://matterport.com/gallery
Some interesting facts about us:
* We host over 1,800,000 highly detailed 3D models of real places captured by our cameras, amounting to over 2PB of data.
* We serve over 60 million 3D views every month, which amounts to over 30 billion requests and 1PB of data.
The WebGL team makes our 3D viewing and editing applications for the web and VR. Core technologies include Typescript, THREE.js, WebGL and Preact. The team sits at the very end of our 3D pipeline: 3D data gets uploaded from our cameras, processed by our vision pipeline in C++, Python, and TensorFlow, before REST APIs serve the data to the WebGL and VR applications, which is what the end user sees.
If you already know "normal" web development quite well, and want to take the next step into the world of 3D and WebGL, send us a note!