
Much of what makes this appealing has nothing to do with the render engine. What I saw was some really excellent character animation / motion capture. The rendering itself wasn't particularly jaw-dropping. And that's not a critique of Unity. Rather it's a critique of their chosen subject. Metal, walls, and artificial objects in general are all very easy to render convincingly. Show me some trees, grass, translucency, or volumetrics and then I'll be impressed.


Even then, I'm unconvinced that the ability to do realistic graphics in real time will actually manifest itself in videogames/VR experiences we can interact with. So much of what makes this look good is the excellent art direction, and as you stated, excellent textures and animations of an artificial thing. But animation students have been producing similar quality visuals for their show-reels for ages. All that has changed is the ability to render faster; the creation of this content is still being done by the same processes.

For videogames/VR to break the photorealism barrier, there needs to be an order-of-magnitude reduction in art development costs for these experiences to be affordable. Not every videogame can be Star Wars Battlefront, where probably $150M ($330M overall cost to develop and market [1]) was spent making the best damn textures videogames have ever seen, but which produced a simple, limited game.

Photorealism is a dead end for videogames unless art costs come down.

[1] http://fortune.com/2015/12/30/star-wars-video-game-sales/


Well, John Carmack was impressed by the results of Brigade. And I think Otoy is a company to watch when it comes to VR and 3D.

https://home.otoy.com/render/brigade/


That's really impressive stuff. The problem, though, is that asset creation remains massively expensive.


Thanks! Asset pipeline for Brigade is Octane.


I'm curious how you manage noise in low-light situations. The one video I found had significant compression artifacts so I wasn't able to tell what the actual engine looks like.


It's pretty but the demo doesn't have a hint of motion in it, and such organic objects as there are (plant leaves) are very plasticky. The HDR bloom is lovely, but presumably being faked just as it would be with any other render pipeline.


It's as dynamic as any rasterized game engine. The SIGGRAPH 2012 demo has exploding meshes at full speed, for example.


> But animation students have been producing similar quality visuals for their show-reels for ages. All that has changed is the ability to render faster, but the creation of this content is still being done by the same processes.

Have you seen modern texturing pipelines? It's most certainly not the same process as 10 years ago. Have a look at this procedural wood flooring generator, built in Substance Designer's node-based texture workflow: https://www.youtube.com/watch?v=Zc5Pdcbjr0U

Same for animation. Doing it by hand won't get you the quality of results that you need for photorealism, so we're bringing in new processes that require a bunch of equipment, a bunch of time, and probably multiple crew members. Used to be you'd make a walk cycle by hand, now it's mocap. With the same old processes, there's no way Star Wars: Battlefront would have been able to make its content in the volume it needed, even with a large art budget.

Now that we know how to create these assets on at least a somewhat feasible budget (compared to armies of artists manually doing 8K textures in Photoshop), the next step will be taking this high-end work intensive stuff and bringing it down to the point where we can crank out similar quality results with lower time investment. It'll take a couple more years, but there's definitely a demand for it.

Releasing today: Substance Painter 2 https://youtu.be/1pIoA34MVBA?t=26 (warning - annoying soundtrack)


Yes, the tools have gotten much much better, that is true. But it still costs an incredible amount of time to generate the art assets, it's a huge part of the budget.


I have not. That's super neat and I'm glad people are making progress! Is there anything happening on the animation side which will push past mocapped movements?


I don't know a ton about animation, but HumanIK might be worth checking out: https://www.youtube.com/watch?v=blLBRmNA3zI

A couple of really cool things about the Substance texturing approach:

Feed it a 3D model and generate maps of its inside corners and outside edges, which can be used to mask layers for dirt accumulation and edge wear https://www.youtube.com/watch?v=zTYia53801k

Squish multiple simple generators/effects together to make really interesting shapes: http://bradfolio.com/art/substance/substance-red-rock-cliff-...

No reference handy, but you can expose parameters like "How much rust" or "color of scattered accent bricks" and make the textures be configurable in-engine. Really streamlines the process if the person building out a level in the game can adjust those sorts of features without going back to the texture artist every time.

There are a lot of neat examples in here: http://polycount.com/discussion/155851/weekly-substance-chal...

EDIT: Here's a cool one http://polycount.com/discussion/comment/2384431/#Comment_238... You can feed it an arbitrary mask to control the color pattern. Note how it's not just painting the color over the bricks, it actually makes brick seams following the edges. And if I had to guess, the dirt/sand accumulation is exposed as a configurable parameter.
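The exposed-parameter idea above ("How much rust") is easy to picture as a per-pixel blend driven by a single slider. A toy Python sketch of the concept; the function name, signature, and blend formula are all made up for illustration, and this is not Substance's actual API:

```python
def blend_rust(base, rust, wear_mask, rust_amount):
    """Hypothetical exposed texture parameter: a single 0..1 'rust_amount'
    slider scales a per-pixel wear mask, which then blends a rust color
    over the base metal color. base and rust are (r, g, b) tuples;
    wear_mask is the 0..1 mask value for this pixel."""
    w = max(0.0, min(1.0, wear_mask * rust_amount))  # clamp blend weight
    return tuple(b * (1.0 - w) + r * w for b, r in zip(base, rust))
```

A level designer turning the slider to 0 gets clean metal back, to 1 gets full rust wherever the mask allows it, with no round trip to the texture artist in between.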


Those methods (not texturing, but animation at least) seem to be evolving with the introduction of accessible mocap and 6dof controllers.

http://blog.leapmotion.com/vr-transform-world-3d-animation/

https://www.youtube.com/watch?v=0a_M9VsZ6Lk

http://www.wired.co.uk/news/archive/2015-05/08/animating-in-...


That sounds like the step forward these things need. VR to really get a good look at your stuff, and the ability to mocap while sitting at your pc. Cool stuff.


About [1], consider this:

https://www.reddit.com/r/todayilearned/comments/4412zk/til_s...

Both the topic: 'more spent on marketing than production' and the first post 'numbers can't be trusted because of creative accounting practices'. I wouldn't be surprised if this applied to the game studios, too.

Anyway, you're essentially stating two things. One is that art students have been able to produce this level of quality for their show-reels for ages. If students can do it, it can't be all that expensive. So why did we not see it in games before, but only in show-reels? Not because these students or studios couldn't translate a show-reel into an interactive show-reel (i.e., a game). More likely, because there was no market for it: nobody had the money for the hardware to run these things in real time. Now we're seeing this render in real time on consumer hardware that might in 5 years be affordable to mainstream audiences.

Seems to me the biggest barrier was that there was no market because there was no cheap hardware that could run high-quality graphics in real time. Of course the cost of art is a factor, a big one, but I don't think it's been the primary limiting factor.

Lastly, the cost of art at any given quality level has come down in a big way; it's just offset by increasing demands. Try to buy some assets of 2011-level quality: it's cheap, while 5 years ago you'd have had to hire a big team to deliver that. Once you approach realism, improvements in quality diminish and you get a build-up of cheap assets (textures, template models, etc.) that can be tweaked, reused, and bought cheaply. Assets from 5-10 years ago are already cheap commodities; tomorrow's assets are expensive, but at some point there's a limit to quality improvements, and just like every other industry (e.g. smartphones in 2016) you see commoditisation and costs come down.


A two minute film is not a videogame. A show reel requires one good animation to work in one specific situation. A videogame needs an animation that works generally. Also, making a good animation is hard, and artists deserve to be paid. Think of movies, where there are still movies today which have unrealistic CG, despite computation being far from a bottleneck for a feature film (Legolas on the elephant in LOTR: Two Towers) - it's not the technical challenges, it's the artistic challenges.

Right now, the vast majority of games have characters with fixed walk cycles that are used no matter the terrain. Realistic CG needs to have a walk cycle that captures the subtle changes in gait that correspond to a given surface. I know people have been doing research on adaptive walk cycles, but AFAIK it has yet to hit production games.

For generating art, there is hope, as procedurally generated games look fantastic (No Man's Sky), but have yet to expand beyond sci-fi games, or into games with story and specific art styles.

Maybe reusing assets is the way forward, but I'm skeptical. Reusing the visuals just means more army guys fighting in sandy deserts crouching behind crates. Maybe the Storm Trooper models for battlefront really are as good as they get. But even then, I think aesthetically realism can lead to a dead end. The most visually impressive game I've played, other than Star Wars Battlefront is The Witness, which is simple in the CG sense, but has some really tremendous visual aesthetic moments that are something I've never seen done in a game. For me, the stylized look and the artistic/game opportunities that enabled were far more exciting than Battlefront's perfect rocks.


> still movies today which have unrealistic CG, despite computation being far from a bottleneck for a feature film (Legolas on the elephant in LOTR: Two Towers)

One point to note - LOTR Two Towers was released in 2002. 14 years ago.

Yeah, I feel old too.


> A videogame needs an animation that works generally.

What are you talking about? The video says "rendered in REAL TIME" so obviously this is working on an XBox with full motion control.


I'm claiming that at this level of hardware power, the hardware isn't the limiting factor in realism; the animations are.

Realistic visuals need a realistic walk cycle. But real walking isn't a perfect, stable cycle (it's disturbed by head/hand motions, surface variations, etc.), so there can be no truly realistic walk cycle, because real walking isn't a cycle at all. AFAIK current videogame animation still revolves around walk cycles. Realism at this level of detail needs some new form of animation, not just the love and care of a person that is so present in this demo.


Regarding walk cycles on terrain, good work is being done with inverse kinematics. For modern games, technology that solves this issue is readily available and generally robust.

I wouldn't describe its usage as trivial, but it's in line with other toolsets that deal with different issues.
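The core of foot-placement IK, the two-bone (thigh/shin) case, can be solved in closed form with the law of cosines. A minimal 2D Python sketch, where the function name and conventions are my own and not any engine's API:

```python
import math

def two_bone_ik(hip, target, l1, l2):
    """Hypothetical two-bone IK solver in 2D: given the hip position, a
    foot target on the terrain, and the thigh (l1) and shin (l2) lengths,
    return (thigh_angle, knee_angle). thigh_angle is measured from the
    +x axis; knee_angle is the interior angle at the knee (pi = straight)."""
    dx, dy = target[0] - hip[0], target[1] - hip[1]
    dist = math.hypot(dx, dy)
    # Clamp to the reachable range so acos stays in its domain.
    dist = min(dist, l1 + l2 - 1e-6)
    dist = max(dist, abs(l1 - l2) + 1e-6)
    # Law of cosines gives the knee bend for this reach distance.
    knee = math.acos((l1**2 + l2**2 - dist**2) / (2.0 * l1 * l2))
    # Thigh angle = direction to target, offset by the triangle's hip angle.
    inner = math.acos((l1**2 + dist**2 - l2**2) / (2.0 * l1 * dist))
    thigh = math.atan2(dy, dx) + inner
    return thigh, knee
```

Production systems layer a lot on top (raycasts to find the terrain target, ankle orientation, blending back into the authored animation), but this is the kernel that plants a foot on uneven ground.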


>If students can do it, it can't be all that expensive.

That's not really true at all. It comes down to man-hours and how much time it takes the artist to do the work. A student isn't incurring that cost when they do work for themselves, but a company would be.


A valid point, but even realistic metal isn't necessarily easy. The volumetrically diffuse reflections (that actually get sharper as geometry approaches them) you see of the robot prisoner's arms on some of the walls haven't been possible in real-time until only very recently, and is clearly a new feature Unity is showing off here.

What's most interesting to me, though, is the Unity logo at the end, which I assume must be showing off engine capabilities as well. It makes heavy use of sub-surface scattering, which is of critical importance to lifelike skin and faces.


I'd much prefer if more people tried to improve animation, because state of the art motion captured movements still look like crap in modern AAA games. Graphics is good enough for now.


Even properly done human models are very challenging. You can spot a fake 3d actor in no time, even in movies.


This was the worst part of The Matrix Reloaded.


Matrix Reloaded was 13 years ago. That's a bit unfair. You can still tell, I think, especially if you know where to look, but I don't think most people would notice, in Fast and the Furious 7, for example.


Nah, it was shoddy work. Compare Terminator 2, which was 25 years ago.


Terminator 2 used far, far more practical effects than one is likely to assume these days. The iconic shot of the T-1000 blown in half and sewing itself back up? Practical effect. Really amazing stuff overall. https://www.youtube.com/watch?v=EYQMfT6nsQs


Or skin, hair, and clothes.

I still found the motion capture and the crisp rendering quite impressive, though.


I always find it's the physics that breaks the feel of a game, not the visuals.



