
To be honest it looks like it was rendered in an old version of Unreal Engine. That may be an intentional choice - I wonder how realistic Gaussian splatting can look? Can you redo lights and shadows, or remove or move parts of the scene, while preserving the original fidelity and realism?

The way TV/movie production is going (record 100s of hours of footage from multiple angles and edit it all in post) I wonder if this is the end state. Gaussian splatting for the humans and green screens for the rest?





The aesthetic here is at least partially an intentional choice to lean into the artifacts produced by Gaussian splatting, particularly dynamic (4DGS) splatting. There is temporal inconsistency when capturing performances like this, which is exacerbated by relighting.

That said, the technology is rapidly advancing and this type of volumetric capture is definitely sticking around.

The quality can also be really good, especially for static environments: https://www.linkedin.com/posts/christoph-schindelar-79515351....


Several of A$AP's videos have a lo-fi retro vibe or use specific effects, such as simulated MPEG A/V corruption. Check out A$AP Mob - Yamborghini High (https://www.youtube.com/watch?v=tt7gP_IW-1w)

Knowing what I know about the artist in this video, this was probably more about the novelty of the technology and the creative freedom it offers than about budget.

For me it felt more like a higher-detail version of Teardown, the voxel-based 3D demolition game. Sure, it's splats and not voxels, but the camera and the lighting give it a strong voxel-game vibe.

Yes, they talk about this in the article and that’s exactly what they did.

It wasn't clear to me how much of this was intentional vs. the limits of the technology at the moment.

I guess the technology still has some quality limitations, otherwise we would already see it in mainstream movies, e.g. to simulate smooth camera motions beyond what is achievable with video stabilization. It's much more difficult to achieve 4K quality that holds up on a movie theater screen, without visible artifacts, than to do an artistic music video.

https://youtu.be/eyAVWH61R8E?t=3m53s

I would say Superman's quality didn't suffer for it.

I would say cost is probably the biggest factor, but it's also a matter of "why bother": it's not CG and it's not conventional 2D filming, so it's just niche. The scenarios where you would actually need this are very rare.


That's interesting. 192 cameras is certainly expensive. Then again, they are doing 4DGS, with movement, so they have to capture every frame from different angles at the same time. I assume 3DGS for static environments (locations) would be a lot easier in terms of hardware. E.g. a single drone could collect photos for an hour, and then they could create arbitrary simulated camera movements that couldn't be filmed conventionally. But again, the quality would have to be high in most cases. The nature of the Superman scene (some sort of hologram) is more forgiving, as it is inherently fake-looking, which helps excuse any artifacts that slip through.

I wonder if you are thinking of the Source engine? I was getting serious Skibidi Toilet vibes during several parts of this video.

Technically, we could have IMAX-level 3D today if you feed it the correct data.


