Voxel Quest Fluid Dynamics and New Rendering Method (voxelquest.com)
194 points by akavel on May 20, 2015 | 65 comments


Author here - as mentioned, I was not expecting this to get posted, but I am happy to answer any questions or elaborate further. For as long as I can stay awake, at least. :)

Some other shots worth looking at:

Some terrain I quickly whipped up based on this method, using Voxel Quest's actual generated map: https://twitter.com/gavanw/status/589499986854838272

Screenshot of the test scene, with median filter: https://twitter.com/gavanw/status/590895532509265920

Here is a short vid showing that it is in fact volumetric: https://twitter.com/gavanw/status/590903499342229504

A few basic tricks I use:

A "macro" pass that ignores small, expensive to compute details.

A "micro" pass that avoids texture reads but produces complex algorithmic textures (texture reads will KILL your performance in many cases because of cache invalidation).

Supercover lines used for raymarching, to precisely march through each cell (see the sketch after the link below).

http://lifc.univ-fcomte.fr/home/~ededu/projects/bresenham/
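
For anyone who wants the flavor of it, a minimal Amanatides & Woo style grid walk looks roughly like this (a toy sketch, not my actual code; a true supercover variant also visits corner-touched cells):

    // Minimal grid traversal: visits every cell the ray passes through.
    // Assumes a unit grid and a direction with no zero components.
    #include <cmath>
    #include <cstdio>

    void marchGrid(float ox, float oy, float oz,   // ray origin
                   float dx, float dy, float dz,   // ray direction
                   int maxSteps) {
        int ix = (int)std::floor(ox), iy = (int)std::floor(oy), iz = (int)std::floor(oz);
        int sx = dx > 0.0f ? 1 : -1, sy = dy > 0.0f ? 1 : -1, sz = dz > 0.0f ? 1 : -1;
        // Ray-parameter distance to cross one full cell along each axis.
        float tdx = std::fabs(1.0f / dx), tdy = std::fabs(1.0f / dy), tdz = std::fabs(1.0f / dz);
        // Ray-parameter distance to the first cell boundary along each axis.
        float tmx = (sx > 0 ? ix + 1.0f - ox : ox - ix) * tdx;
        float tmy = (sy > 0 ? iy + 1.0f - oy : oy - iy) * tdy;
        float tmz = (sz > 0 ? iz + 1.0f - oz : oz - iz) * tdz;
        for (int i = 0; i < maxSteps; ++i) {
            std::printf("cell (%d, %d, %d)\n", ix, iy, iz);  // test this cell's contents here
            if (tmx <= tmy && tmx <= tmz) { ix += sx; tmx += tdx; }  // x boundary is nearest
            else if (tmy <= tmz)          { iy += sy; tmy += tdy; }  // y boundary is nearest
            else                          { iz += sz; tmz += tdz; }  // z boundary is nearest
        }
    }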


Can you give an overview of the technique you are using for this project (in general, not just this demo) for someone with only undergraduate level graphics knowledge?

Do you basically draw two triangles that fill the whole screen and then render everything using a shader?

And how do you store that many voxels? A sparse data structure? When you have a large solid area do you simply say this entire area is filled so don't have to represent the voxels individually?

I saw a video where you cut away sections of a building using a brush. Do you store the building voxels and then store the subtraction separately? If you had a world with thousands of buildings in it and cut away a little at each does that mean you can no longer share the shape of each building and space requirements blow up?


> Do you basically draw two triangles that fill the whole screen and then render everything using a shader?

Yes. The shader determines the ray origin and direction for each pixel, and casts a ray out from there with pretty standard ray tracing methods, plus the few additional tricks I mention above.
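
A rough sketch of that per-pixel ray setup, assuming a simple pinhole camera (the names here are illustrative, not my actual code):

    // Reconstruct a world-space ray direction for a pixel in a fullscreen pass.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 normalize(Vec3 v) {
        float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        return { v.x/len, v.y/len, v.z/len };
    }

    // px, py: pixel coords; w, h: resolution; fov: vertical field of view (radians).
    Vec3 pixelRayDir(float px, float py, float w, float h, float fov,
                     Vec3 camRight, Vec3 camUp, Vec3 camFwd) {
        // Map the pixel to [-1, 1] normalized device coordinates, aspect-corrected.
        float ndcX = (2.0f * px / w - 1.0f) * (w / h);
        float ndcY = 1.0f - 2.0f * py / h;
        float planeDist = 1.0f / std::tan(fov * 0.5f);  // distance to the image plane
        return normalize({
            camRight.x * ndcX + camUp.x * ndcY + camFwd.x * planeDist,
            camRight.y * ndcX + camUp.y * ndcY + camFwd.y * planeDist,
            camRight.z * ndcX + camUp.z * ndcY + camFwd.z * planeDist });
    }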

> And how do you store that many voxels? A sparse data structure? When you have a large solid area do you simply say this entire area is filled so don't have to represent the voxels individually?

No voxels anymore, although it can be fairly easily "voxelized" by clamping the step space of the rays to approximate towards the nearest cube. So, I guess I might have to change the name, or make it look more voxel-ly :)
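
Roughly like this (a toy stand-in, not my actual code):

    // "Voxelizing" a smooth field: snap each ray sample to the center of its
    // containing cell, so the whole cell reports one value.
    #include <cmath>

    float sceneSDF(float x, float y, float z) {
        return std::sqrt(x*x + y*y + z*z) - 3.0f;  // placeholder scene: a sphere
    }

    float voxelizedSample(float x, float y, float z, float cellSize) {
        float qx = (std::floor(x / cellSize) + 0.5f) * cellSize;
        float qy = (std::floor(y / cellSize) + 0.5f) * cellSize;
        float qz = (std::floor(z / cellSize) + 0.5f) * cellSize;
        // Note: the result is no longer a true SDF, so the marcher must clamp
        // its step size to at most cellSize to avoid skipping cells.
        return sceneSDF(qx, qy, qz);
    }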

> I saw a video where you cut away sections of a building using a brush. Do you store the building voxels and then store the subtraction separately? If you had a world with thousands of buildings in it and cut away a little at each does that mean you can no longer share the shape of each building and space requirements blow up?

You can store a million modifications in a few megabytes. I was never intending to allow that many, though, but I will figure out which corner cases are worth addressing.


> You can store a million modifications in a few megabytes.

How are you localizing the modifications? I have read of techniques for allowing modifications in work like you did previously, by tweaking a small set of inputs based on the modifications you need.

However I don't know how you would do that in a ray marching scheme.

I guess you could do a more traditional render of simple objects into a buffer to generate a mapping of what each pixel needs access to combined with storing your changes in a texture. However that would kill your cache as locality would be painful to maintain...


Modifications can be localized in each cell or voxelized. The old method I used just voxelized the modifications. In ray marching with SDFs, union and subtraction are easy. You can load the localized modifications for a cell with one set of texture reads. They will probably be roughly approximated and limited.
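
For anyone unfamiliar, the CSG part really is this small (a sketch with a toy scene, not my actual code):

    // Union and subtraction on signed distance fields are one-liners, which is
    // why brush-style modifications compose so cheaply.
    #include <algorithm>
    #include <cmath>

    float sdfUnion(float a, float b)    { return std::min(a, b); }
    float sdfSubtract(float a, float b) { return std::max(a, -b); }

    float sphereSDF(float x, float y, float z, float r) {
        return std::sqrt(x*x + y*y + z*z) - r;
    }

    // Toy scene: a ground plane at y = 0 with a spherical brush carved out.
    float modifiedScene(float x, float y, float z) {
        float ground = y;  // plane SDF: signed height above the plane
        float brush  = sphereSDF(x - 4.0f, y, z - 2.0f, 0.5f);  // brush at (4, 0, 2)
        return sdfSubtract(ground, brush);
    }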


Are you doing ray tracing or ray marching? You seem to mention both.

Very neat stuff! I've been playing around with ray marching and it's awesome what can be done. I love finding mature projects to inspire me.


Primarily ray marching. I tend to switch / misuse the terms. I do perform a few calculations without marching though (ray intersection with some types of geometry).


> No voxels anymore

So how do you represent geometry then? Mesh? SDF? Space tetrahedralization?


Yeah, that's what I'm really interested in. I get how the shader rendering works for amazingly complicated geometry when it's just a simple function to calculate it, like a fractal or something like that, but when you have a world full of irregular, non-repeating shapes, how do you represent that?


Each cell can contain any number of arbitrary shapes (and they can intersect, despite what my test environment might suggest). Even the most irregular shapes can often be approximated mathematically, in ways that might not be as complex as they seem. I would read up here; there is a wealth of information on these topics (and lots more if you Google around): http://www.iquilezles.org/www/index.htm
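
To make that concrete, here are a couple of the standard building blocks from those pages - primitives plus a smooth minimum get you surprisingly far (a sketch, not my engine code):

    // Standard SDF primitives and a smooth blend (after Inigo Quilez's articles).
    #include <algorithm>
    #include <cmath>

    float sphereSDF(float x, float y, float z, float r) {
        return std::sqrt(x*x + y*y + z*z) - r;
    }

    // Axis-aligned box with half-extents (bx, by, bz), centered at the origin.
    float boxSDF(float x, float y, float z, float bx, float by, float bz) {
        float qx = std::fabs(x) - bx, qy = std::fabs(y) - by, qz = std::fabs(z) - bz;
        float ox = std::max(qx, 0.0f), oy = std::max(qy, 0.0f), oz = std::max(qz, 0.0f);
        return std::sqrt(ox*ox + oy*oy + oz*oz)
             + std::min(std::max(qx, std::max(qy, qz)), 0.0f);
    }

    // Polynomial smooth minimum: blends two shapes into one organic surface.
    float smoothMin(float a, float b, float k) {
        float h = std::max(k - std::fabs(a - b), 0.0f) / k;
        return std::min(a, b) - h * h * k * 0.25f;
    }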


Same way I've always done in VQ - with math/logic :)


I don't get it - do you write a mathematical function that says whether or not you have intersected with anything in the world? How does that not turn into a huge piecewise function? How does that allow you to change the geometry, as I guess it gets compiled into the shader?


You can get somewhat of an idea how it works here:

http://www.voxelquest.com/news/how-does-it-work

You can change geometry by passing parameters to the shader or by creating special object generation functions within the shader. This is not intended to render everything in the game, mostly just aspects of the environment that can be easily procedurally generated, like terrain and structures. Other things can be merged in from a traditional pipeline, like polygonal character models (or even sprites, as I have shown in my past demos).

The more you can define mathematically, the better. Texture lookups are sometimes almost an order of magnitude more expensive, and always less precise.

You start really simple, and ask: did I hit the bounding box that contains the object? If so, did I hit the more complex piece of geometry within that? If so, what UVW coordinate did I hit on the geometry? Marching through the UVW coordinates on the ray, you can create any sort of procedural texture. I have shown a voronoi texture and a shingle texture, but really anything can be created, and some procedural textures are very cheap compared to others (voronoi is relatively expensive; see the sketch below). If you look at old shots of my terrain, it is pretty much all created with voronoi and noise.

http://www.voxelquest.com/gallery.html
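
If you want the flavor of such a procedural texture, here is a minimal 2D cell-noise (voronoi-style) function of the kind described on iq's site - the hash is an ad-hoc illustration, not the one I actually use:

    // Distance to the nearest jittered feature point -> voronoi cell pattern.
    #include <algorithm>
    #include <cmath>

    // Cheap 2D hash into [0, 1); any decent integer hash works here.
    float hash2(int x, int y, int salt) {
        unsigned h = (unsigned)x * 374761393u + (unsigned)y * 668265263u
                   + (unsigned)salt * 2246822519u;
        h = (h ^ (h >> 13)) * 1274126177u;
        return (float)(h & 0xFFFFFF) / (float)0x1000000;
    }

    float voronoi(float u, float v) {
        int cu = (int)std::floor(u), cv = (int)std::floor(v);
        float best = 1e9f;
        // Check the 3x3 neighborhood of cells around the sample point.
        for (int j = -1; j <= 1; ++j)
            for (int i = -1; i <= 1; ++i) {
                float fx = cu + i + hash2(cu + i, cv + j, 0) - u;
                float fy = cv + j + hash2(cu + i, cv + j, 1) - v;
                best = std::min(best, fx * fx + fy * fy);
            }
        return std::sqrt(best);
    }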

It is possible things will get too expensive to do in realtime, but the option will always be there. Worst case scenario is that I can render like I did with my old isometric engine and cache the render results. At the very least the option will always be there for more powerful computers in the future. That said I think realtime is feasible on decent midrange/modern hardware.


This website has some nice explanations (old, but still relevant)

http://iquilezles.org/www/articles/raymarchingdf/raymarching...

Edit: Direct link to Iñigo Quílez's presentation http://iquilezles.org/www/material/nvscene2008/rwwtt.pdf


Can you provide more details on this? It's where it gets interesting... :)


Yep - to avoid duplicating answers, see if the info I just posted above answers any of your questions :)


One other note is that there has been quite a bit of progress in other aspects, if you have missed the past updates:

Sprite rendering, smooth movement, collision, and more: http://www.voxelquest.com/news/update-04-08-2015

AI and some other stuff: http://www.voxelquest.com/news/update-03-11-2015

Underwater: http://www.voxelquest.com/news/update-11292014

Full list of videos: http://www.voxelquest.com/videos.html


I didn't realize they were called "supercover" lines... I use the same algorithm in my 2D raycast projection renderer.

Very nice work! I'm super-impressed by every update you make.


Thanks! I had no idea what they were called until I googled around. :)


Looking forward to a full breakdown on this :)


Do you have any reflection demos yet?


Yes, here: https://twitter.com/gavanw/status/590906524177801216

But reflections are slow, so I am not using them; all lighting occurs in screenspace for now.


This sounds a bit like Euclideon's Unlimited Detail tech demo video, which got some press a couple of years ago and then vanished without a trace:

https://www.youtube.com/watch?v=00gAbgBu8R4

(Warning. Video may contain critical levels of overhyped vapourware.)

To me it sounded like Euclideon had figured out a new wrinkle in efficiently storing and searching very large instanced voxel spaces, and Voxel Quest sounds very similar. While Euclideon's demo was obviously a rigged tech demo, it still looked very cool, and I'm glad that it's turning out that this sort of thing is feasible.


Oh wow, was not expecting this to go up on HN...and I thought I was going to get some sleep. :) The primary difference between Euclideon's stuff and mine is that I am willing to disclose how my stuff works. It is actually a really simple combination of marching in a grid and then working down into that grid cell with distance field marching. I am doing a writeup on it as soon as I get some time. :) The primary difference from the old methods I used in Voxel Quest is that the old methods calculated the full volumes, whereas the new method calculates only the visible surfaces.
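
In pseudocode-ish C++, that combination looks something like this (a toy sketch with stand-in scene functions, not my actual implementation):

    #include <cmath>

    // Coarse "macro" test: which grid cells contain anything (toy checkerboard).
    bool cellOccupied(int ix, int iy, int iz) {
        return ((ix + iy + iz) & 1) == 0;
    }

    // Fine "micro" field: a sphere centered in each unit cell (toy stand-in).
    float cellSDF(float x, float y, float z) {
        float fx = x - std::floor(x) - 0.5f;
        float fy = y - std::floor(y) - 0.5f;
        float fz = z - std::floor(z) - 0.5f;
        return std::sqrt(fx*fx + fy*fy + fz*fz) - 0.4f;
    }

    // Two-level trace: walk the grid, sphere-trace only inside occupied cells.
    // Returns hit distance along the ray, or -1 on a miss. Fixed stepping is
    // used for brevity; a real implementation walks exact cell spans with a DDA.
    float trace(float ox, float oy, float oz, float dx, float dy, float dz) {
        float t = 0.0f;
        for (int step = 0; step < 256; ++step) {
            float x = ox + dx*t, y = oy + dy*t, z = oz + dz*t;
            if (cellOccupied((int)std::floor(x), (int)std::floor(y), (int)std::floor(z))) {
                float d = cellSDF(x, y, z);
                if (d < 0.001f) return t;   // surface hit
                t += d;                     // sphere-tracing step
            } else {
                t += 0.5f;                  // skip through empty space
            }
        }
        return -1.0f;
    }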


Very cool! Are you doing this on the CPU or GPU?


Fluid sim is CPU, all rendering is done on the GPU though.


Since I happen to know the founder of Euclideon and the company a little bit, I have to emphasize here that their tech was and is real. Sure, in the videos it wasn't presented in a neutral academic setting, but it's far from rigged, and in fact is being used in the geospatial industry today.

Voxel Quest's technology looks very promising too, and I am looking forward to seeing it applied in a game setting.


Nobody who knows what they are talking about has ever claimed Euclideon's tech isn't real. What those people have said is that A: it isn't revolutionary because B: it's pretty obviously simply an implementation of a well-known technique with C: well-known and quite significant limitations that constrain it to a very small useful space, which is the reason the many, many people who know and understand the technique quite well tend not to use it for anything, and D: not even a shred of evidence that they have overcome the limitations of the technique. And E: as a consequence of A-D, the verbiage constantly pumped out by the company is grossly overstated to the point of either just riding the line of fraudulent, or crossing it. Particularly as the constraints of the technology generally precluded it from being useful in the fields they claimed it would be useful in.

Using it for sort of 3D photos of real-life spaces isn't a bad use, and I have no problem believing it's a real application of the tech. However C and D above remain in full force.


For those who haven't seen it - Euclideon released a new video about that: https://www.youtube.com/watch?v=5AvCxa9Y9NU


That video is actively deceptive in at least one point:

He starts out explaining that current engines lack realistic lighting.

Then he shows scenes with realistic lighting.

He fails to mention that current engines create lighting from mathematical formulas, so it can be changed easily and on the fly. In fact, for many of them it's entirely possible to change the time of day in realtime, resulting in changes of lighting direction, intensity, and color.

Meanwhile the Euclideon lighting looks realistic because it's rendering camera-recorded images onto voxels, which also means that the lighting cannot be changed at all. It's a static snapshot. In order to get different lighting into the scenes he's shown they'd need to do a different recording at another time of day, and of course completely arbitrary lighting is impossible at anything near that level of quality.


"Euclideon Makes World’s Most Realistic Graphics"

Yeah, honest and humble as always. I wonder how the people who actually developed the algorithms and procedures they used feel about this.


The same way everyone in any industry takes credit for work done by other people.

And it's just product marketing. Most of the world aren't engineers. Boring them with technologies and specifics isn't how you sell products.


It isn't 'just product marketing'. Their videos are ridiculous and prey upon people not being able to identify what they aren't seeing.

They made grandiose claims and diverted attention to things that didn't matter. How many games are using what they've done?


It seemed pretty clear that they were more interested in the geospatial, real estate, etc. industries. Gaming is not even a sensible industry to break into, given how fundamentally different the workflow would be to use this technology.

And sure it was a little over the top but then again so is most marketing.


That is 100% not true; their videos were of them running around in video game engines and pointing at low-poly trees!


> Gaming is not even

They claimed themselves that two games are being made with their engine.


As soon as he said, "These are images of the real world", I immediately thought they just looked like high quality renders.

Wonder if there's a kickstarter in their future?


It's nothing new. Textured 3D point clouds are a vast topic with huge amounts of research. Euclideon just successfully markets hype to rich people and seems able to write software less sucky than the existing solutions.


Oh, I know that, something about how this is all presented just rubs me the wrong way.

I feel like the videos about it misstate critics' objections. Instead of "critics have pointed out that this involves massive amounts of memory and that arbitrary and complex animations will likely cause problems" it's just "critics said it couldn't be done".

Also, the talking about investors at the end there rang alarm bells for me.


The only thing they did differently was marketing and hype. They didn't invent anything, they didn't come up with anything new. Their own rendering looked terrible while they got up close to low-poly trees to try for a comparison. Snake oil, pure and simple.

3D looking good is hugely about lighting, geometric detail is not really a problem.


So, just so you know, Euclideon has made a commercial application using their technology - Geoverse, a set of tools for geo-data.

Yes, it sort of works. http://www.atpress.ne.jp/releases/33345/a_1.jpg


No discussion of ray marching is complete without mentioning Shadertoy (http://www.shadertoy.com) and the work of great shader artists such as Iñigo Quílez, Paul Malin, Dave Hoskins, Reinder Nijhoff and many others. If you are interested in these techniques, check it out.

One of my favorites is Elevated: https://www.shadertoy.com/view/MdX3Rr.

I've converted many of the most beautiful ones to auto-stereoscopic 3D for Tao3D (http://tao3d.sourceforge.net). An infinite landscape in glasses free 3D is a thing of beauty :-)


I've seen this website mentioned many, many times. What kind of background would one need to begin to start understanding/making these things?

Every time I see this site, I think I should figure this out. I don't see any sort of introduction area, so I click on a shader and am presented with some totally indecipherable code.


The maths isn't hard, but you do need a decent grasp of vectors and (optionally) 3D transformation matrices, and it's pretty opaque until you figure out the terminology.

I haven't done GPU raymarching myself but I've used the same techniques on the CPU side in order to develop a whole-planet procedural renderer (in Ada, of course); the raymarching was relatively straightforward but I got horribly bogged down trying to make a volumetric atmosphere work.

One resource I found really useful was Iñigo Quílez's website; in particular, his article on basic terrain raymarching explains most of the concepts: http://www.iquilezles.org/www/articles/terrainmarching/terra...

(Also known as iq, he's responsible for a number of amazing demos, including the 4kB demo _Elevated_: https://www.youtube.com/watch?v=jB0vBmiTr6o He also appears to have written ShaderToy. Search for iq to see his stuff. And omfg, ShaderToy lets you write shaders to produce _music_ now? https://www.shadertoy.com/view/ldXXDj)
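
The core loop from that terrain article is short enough to show in spirit (my own toy paraphrase, with a placeholder terrain function):

    // Heightfield raymarching: step along the ray and stop when it drops
    // below the terrain, then interpolate the crossing point.
    #include <cmath>

    float terrainHeight(float x, float z) {
        return std::sin(x * 0.3f) * std::cos(z * 0.2f) * 2.0f;  // placeholder hills
    }

    // Returns the distance at which the ray hits the terrain, or -1 on a miss.
    float castRay(float ox, float oy, float oz, float dx, float dy, float dz) {
        const float dt = 0.1f, tMax = 100.0f;
        float lastT = 0.0f, lastDiff = oy - terrainHeight(ox, oz);
        for (float t = dt; t < tMax; t += dt) {
            float x = ox + dx*t, y = oy + dy*t, z = oz + dz*t;
            float diff = y - terrainHeight(x, z);
            if (diff < 0.0f) {
                // Linearly interpolate between the last sample above the
                // terrain and this one below it.
                return lastT + dt * lastDiff / (lastDiff - diff);
            }
            lastT = t; lastDiff = diff;
        }
        return -1.0f;
    }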


When I first looked at many shader examples, such as those with SDF, it looked like gibberish.

The first thing you need is a firm understanding of geometry and basic vector/matrix operations (add, sub, mult, dot, cross, etc.). You might think you remember stuff from your high school math classes, but if you are not actively using this stuff frequently, you will not truly understand it.

For the record, I was really bad at basic math, and barely scraped by in high school. It wasn't until I started programming heavily that I really learned how everything worked.

Start as simple as possible - rendering a triangle on the screen with polygons (there are a million tutorials for this). Then move up to cubes or something. I recommend starting with Three.js as it is very easy to get up and running. Eventually you will get into shaders, which are fairly easy to understand but do have quirks you will gradually learn.


I agree! I learned a lot from both Shadertoy and IQ's website.

Also, while we are mentioning people, I should mention that Jon Olick is running a Kickstarter right now:

https://www.kickstarter.com/projects/1760210928/voxelnauts-v...

Jon has been a major pioneer in voxel research and ray-based techniques.


Looks awesome.

Gavan, any advice on where to get started with graphics programming? I'm an ex-games programmer (AI, networking) who always avoided graphics like the plague - too many weird hardware permutations, fallbacks, crappy drivers, etc. Now that I'm not in the industry, I'd like to learn for fun. But I have no idea what the "sweet spot" is for desktop (non-mobile/console) graphics - DirectX? OpenGL? What version of vertex/pixel/geom shaders are the norm? Are there standard practices for shadows/lighting now, or are they still done on an ad-hoc basis? How normal is GPU ray tracing? Virtual textures? And so on. I haven't been able to find anything that discusses these issues at a comprehensive/holistic level. Anything useful you've found?


Thanks!

My advice is to find a good set of tutorials and start ripping them apart. In another comment I mentioned Three.js is a great way to get up and running fast. WebGL has limitations but the ease of development pays off especially if you are just learning.

TBH, it does not matter too much which versions you use. I am still using one of the older shader versions (1.2 or 1.3) but I will probably upgrade, just because there is not a major downside to using a newer version.

Lighting and shadows vary everywhere and there is no best way, just depends on your use case. Cascaded shadow maps are quite common in high-quality, polygon-based pipelines. Commercially, GPU ray tracing is very rare, if for no other reason than the tooling is just not there like it is for polygons.

Ultimately there is no fast path - you just take it one step at a time, starting with simple goals, and you will get there. Even a year of hobby development will make a dramatic difference in your understanding, but the payoff is slower from there.


I'm not Gavan, but I like the style of http://www.amazon.com/Math-Primer-Graphics-Development-Editi... and recommend it to anyone getting started. You don't need any more than high school maths and some familiarity with programming in your language of choice (though the example code is in C++).

If you're just getting started and want a low-impact way to prototype things try WebGL! (I'm an unbiased WebGL contributor and user).

OpenGL seems to be the way to go on most platforms; DirectX is probably easier on Windows. There are libraries that take care of the platform-specific code for you if you don't want to write your own that are quite good: SDL2, glfw, etc.

hth


I'd be curious to know what the new algorithm is that dispenses with "chunk loading" and allows all scene data to appear instantly.


I expect that it's signed distance field ray marching.

You can see a demo of the tech here: http://m.youtube.com/watch?v=lwFVlNytq0Q

And a presi of how that demo works here: http://on-demand-gtc.gputechconf.com/gtcnew/on-demand-gtc.ph...


Wow, thanks!


Basically, it's the difference between calculating a full volume and just the surface of that volume. The old chunks had to store points or cached bitmaps (depending on whether it was the isometric method or the perspective method). The new method stores nothing, and generates the scene instantly based on the camera's viewing volume. As mentioned above, it's really simple stuff (at least, as simple as this stuff gets!), and not new - it is based on distance field marching and grid marching.


> I'd be curious to know what the new algorithm is that dispenses with "chunk loading" and allows all scene data to appear instantly.

Looks like instancing to me. There isn't really that much data shown at once.


Its all "unique" - have a look at:

https://twitter.com/gavanw/status/590884138871230464

That said, the uniqueness is limited to the generation algorithms and parameters you pass to those algorithms. Each cell can be completely unique, I vary the color and shape here but you could vary any generation param.
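
The trick, roughly, is that per-cell parameters can be derived from a hash of the cell coordinates, so nothing needs to be stored (a toy sketch, not my actual generator):

    // Derive deterministic per-cell generation parameters from coordinates.
    unsigned cellHash(int x, int y, int z) {
        unsigned h = (unsigned)x * 374761393u + (unsigned)y * 668265263u
                   + (unsigned)z * 2246822519u;
        return (h ^ (h >> 13)) * 1274126177u;
    }

    struct CellParams { float hue; float height; int shapeKind; };

    CellParams paramsForCell(int x, int y, int z) {
        unsigned h = cellHash(x, y, z);
        CellParams p;
        p.hue       = (float)(h & 0xFF) / 255.0f;                // vary color
        p.height    = 1.0f + (float)((h >> 8) & 0xFF) / 64.0f;   // vary shape
        p.shapeKind = (int)((h >> 16) & 0x3);                    // pick a generator
        return p;
    }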


Yeah, when I saw the video on the VoxelQuest site it immediately reminded me of this demo of raymarching/signed-distance-fields: https://dl.dropboxusercontent.com/u/27844576/raymarch/raymar...


Yep, it uses SDF in the detail pass :)


Looking forward to playing with the API and building my own worlds/demos/games :-)

I'm hoping your license terms let folks initially play with it for free like the UE4/Unity licenses.


Full source will be up on Github (with the possible exception of any code that needs to be secure, like server stuff). It will get leaked/pirated anyway, so not going to bother charging for the source.


I understand if you just don't want to deal with commercial licensing and developer support. But really, regardless of whether your code is leaked or not, you can collect royalties the same way Unreal does on commercial products that use it.


Yeah there will be some sort of license in place although I hope to one day have it under permissive open source. All I mean is that I'm not limiting code to people who have purchased the game. :)


Did they just add a feature or replace the whole rendering engine? From the second video, it looks like a replacement. I'm starting to wonder if it will ever be finished.


Much of the old rendering pipeline is still there for materials, fog, post process fx, screenspace lighting, etc. The only part of the rendering that has changed is the part that generates the depth and normals. I have been using a test scene to quickly get some other stuff up and running, stress test things, and debug. But it's still capable of rendering the old stuff, although there is a bit of work to get it back in. :)

I think the performance improvements were a necessary evil. The old chunk loading was simply too slow and had a negative impact on gameplay. I'm hoping to keep things going on the gameplay front as well but I am only one man! :)


To me it looks like the effort was worth it. The chunk loading was a showstopper, whereas this looks really slick now.


Thanks!



