
so it wouldn't be easy because these scans are highly detailed and so would require too many polygons to be loaded at once

would this remain true for modern higher end graphics cards?



Even modern high-end graphics cards use abstractions of the base data to create much of the final output's fine detail. Tessellation and similar techniques for complex geometry such as compound curves let millions or billions of polygons be visually simulated without being present as polygon data. That increases opportunities for parallelizing the work on the GPU while reducing load on the communication buses and VRAM.
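To make the "few polygons stored, millions rendered" idea concrete, here's a toy CPU sketch (real GPUs do this in dedicated tessellation stages, not like this): one stored control triangle is recursively split at its edge midpoints, so the renderable triangle count grows 4x per level while the stored data stays a single triangle.

```python
def subdivide(tri, levels):
    """Split a triangle into 4 via edge midpoints, recursively."""
    if levels == 0:
        return [tri]
    a, b, c = tri
    mid = lambda p, q: tuple((pi + qi) / 2 for pi, qi in zip(p, q))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    out = []
    for t in [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]:
        out.extend(subdivide(t, levels - 1))
    return out

# One control triangle stored...
base = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
# ...over a million triangles produced at "render time".
tris = subdivide(base, 10)
print(len(tris))  # 4**10 = 1048576
```

A real tessellator also displaces the generated vertices (e.g. toward a curved surface or by a height map), which is where the visual detail actually comes from; this sketch only shows the amplification.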

As an example, you could probably represent something like the grip of this FLIR camera in a couple hundred polygons plus surface/curve definitions to help the rendering engine tessellate correctly. On the other hand, this overall scan is 357,000 vertices. Sure, you can simplify it and bake a bunch of the detail into a normal map, but that then requires manually reworking the texture map and various other postprocessing steps to avoid creating a glitchy mess.

https://i.imgur.com/aAwoiXU.png


> it wouldn't be easy because these scans are highly detailed and so would require too many polygons to be loaded at once

In practice a 3D artist could very easily create low-poly models of these objects. The high-poly scan serves as a useful reference for that low-poly replica. (Though to be honest, many artists can just look at images of the object and do the same.)

This is not even hard, on the order of minutes (for something like the Rosetta Stone) or days (for something seriously detailed).

In this case, where there is a will, there is a way. In fact, this "reduction" step is very often part of the game creation pipeline already. Monsters/characters/objects very often get sculpted at a higher resolution, and those high-resolution meshes are then reduced down to something more manageable (while the details get baked into a bump map texture or similar).
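As a rough illustration of what that reduction step does (this is a deliberately crude "vertex clustering" sketch, not any studio's actual tooling, which typically uses edge-collapse methods with error metrics): snap every vertex to a coarse grid, merge vertices that land in the same cell, and drop triangles that collapse. The fine detail discarded here is exactly what gets baked into a normal/bump map instead.

```python
def decimate(vertices, triangles, cell=0.25):
    """Merge vertices that fall into the same grid cell of size `cell`."""
    key = lambda v: tuple(round(c / cell) for c in v)
    clusters = {}   # grid cell -> index into new_verts
    new_verts = []
    remap = []      # old vertex index -> new vertex index
    for v in vertices:
        k = key(v)
        if k not in clusters:
            clusters[k] = len(new_verts)
            new_verts.append(v)
        remap.append(clusters[k])
    new_tris = []
    for i, j, k in triangles:
        a, b, c = remap[i], remap[j], remap[k]
        if len({a, b, c}) == 3:  # skip triangles that collapsed
            new_tris.append((a, b, c))
    return new_verts, new_tris

# Two nearly coincident vertices (a typical scan artifact) get merged:
verts = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
tris = [(0, 2, 3), (1, 2, 3)]
nv, nt = decimate(verts, tris)
print(len(nv))  # 3 vertices left after merging
```

Real decimators (e.g. quadric error metric simplification) pick which edges to collapse based on how much the collapse distorts the surface, which is why they preserve silhouettes far better than grid snapping.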


Maybe I'm buying into the marketing too much, but it's my understanding that Unreal engine 5 can do this automatically.


Not too much, it does actually work :) The concept is generally called "virtualized geometry" and Unreal's implementation is called "Nanite" but others are starting to pop up too, like the virtualized geometry implementation in Bevy.


> but you have to compress the scan

A bit simplified but yeah. In the industry I think it's commonly referred to as "cleaning up the topology" or "simplifying the topology" where "topology" is the structure of the mesh essentially. You'd put the scan/model through something like this: https://sketchfab.com/blogs/community/retopologise-3d-scans-...

> is this true with top spec machines too?

Games frequently feature hundreds (sometimes thousands) of models on screen at the same time, so optimizing each individual model matters. Look at the launch of Cities: Skylines 2 for an example of a game that shipped without properly optimized 3D models: performance was abysmal, in part because the resident models were far more detailed than a city simulation game could justify.
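The standard way engines keep that per-model cost in check is level of detail (LOD): several pre-simplified versions of each model, swapped by camera distance so only nearby objects pay the full polygon price. A minimal sketch (the distance thresholds and triangle budgets here are made up for illustration):

```python
# Hypothetical LOD table: (max distance, triangle budget) pairs,
# ordered from nearest/most detailed to farthest/cheapest.
LODS = [
    (10.0, 50_000),        # close-up: full-detail mesh
    (50.0, 5_000),         # mid-range: simplified mesh
    (float("inf"), 500),   # background: very coarse mesh
]

def pick_lod(distance):
    """Return the triangle budget for an object at `distance`."""
    for max_dist, tri_budget in LODS:
        if distance <= max_dist:
            return tri_budget

print(pick_lod(5.0))    # 50000
print(pick_lod(100.0))  # 500
```

A raw scan effectively ships only the "close-up" row for every object at every distance, which is why even a strong GPU chokes once many of them are in frame.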


For rendering an individual piece, maybe not; but as part of much larger scene with many objects, animation, and rendering effects, it would place an unnecessary burden on the GPU.

It would be much easier to simply have a 3D artist create the object anew from scratch, in a format and resolution that best fits the game.


Higher end graphics cards probably also mean more detailed scans being available.



