This post is inspired by the new Unreal Engine tech demo, made by ILM and showcased at GDC 2018. The demo was rendered on an Nvidia DGX Station with four Tesla V100s connected via NVLink, at 1080p and 24 fps. That's four graphics cards that cost $8,700 each.

Before starting, I would like to make clear that this post isn't meant to bash this excellent work by ILM. It is meant to be a simple exercise in understanding what we really perceive as photorealistic, and how much we actually need (or don't need) extreme tech. And to ask ourselves whether we actually need raytracing to achieve the same effect, or whether we can also do it on consumer hardware, with tech that isn't so recent.
I'm starting with one simple statement: cinematic look and photorealism don't have much to do with physical correctness. I know that many readers will frown at this statement. But before you do, take a moment to consider it.
How is most real-world cinematic photography done today? Do we take a simple camera and shoot what we see? No. Cinematic photography has a lot to do with faking and controlling how real-world optical physics transfers to the screen. Just think of color correction. Color correction actually modifies light behaviour in many ways: when you change the contrast of your image, you are changing how light behaves, along with its natural range and falloff. The result you get with color correction is no longer physically correct, but it can still be perceived as photorealistic. The same happens with framerate: we tend to perceive 24 fps (the cinematic framerate) as more realistic than 50 fps or 100 fps. This is because we are seeing on screen something that we expect to see, something we associate with a high-quality "cinematic" look.
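To make the contrast point concrete, here is a minimal sketch (not any particular grading tool's pipeline; the curve shape is my own illustrative choice) of an S-curve contrast adjustment applied to linear pixel values. The remapped values no longer follow the scene's physical light falloff, yet this kind of remap is exactly what viewers read as "cinematic":

```python
def contrast_curve(x, strength=0.6):
    """Sigmoid-style S-curve pivoting around middle gray (0.5).

    strength=0 leaves the value unchanged; higher values push shadows
    down and highlights up, departing from the physically correct range.
    """
    return x + strength * (x - 0.5) * (1.0 - abs(2.0 * x - 1.0))

# Shadows get darker, midtones stay put, highlights get brighter:
pixels = [0.1, 0.3, 0.5, 0.7, 0.9]
graded = [round(contrast_curve(p), 3) for p in pixels]
print(graded)  # -> [0.052, 0.228, 0.5, 0.772, 0.948]
```

The curve is symmetric around 0.5 and fixes the endpoints at 0 and 1, so only the distribution of light, not its total range, is distorted.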
Also, if we look at how many movies are photographed on set, we will notice that cinematographers have to use many different techniques to avoid unwanted reflections. Make-up itself comes to mind as one of the most common tricks: it serves to eliminate unwanted reflections from an actor's face.
When we light a scene that has to be filmed in real life, it all comes down to artistry, to "faking". In CG we can do it even more easily, since we don't have to physically move objects or worry about stray reflections appearing. We have maximum control over our scene.
So, it's time to come back to the Unreal demo video: why does this video look good? Because of raytracing? My opinion is different. I think it looks good because of the artistry of the lighting, modelling, and texturing, and that it could look just as good without costly raytracing, by using the pretty common features already available in a game engine.
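One of those common features is the reflection probe: instead of tracing a ray through the scene, the engine mirrors the view direction off the surface normal and uses that direction to index a pre-rendered cubemap captured from the probe's position. A rough sketch of the idea (this is the general technique, not Unity's or Unreal's actual implementation):

```python
def reflect(incident, normal):
    """Mirror reflection of a view ray: R = I - 2*(I.N)*N."""
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def dominant_cubemap_face(direction):
    """Pick which of the six cubemap faces the direction points into."""
    axis = max(range(3), key=lambda k: abs(direction[k]))
    sign = '+' if direction[axis] >= 0 else '-'
    return sign + 'xyz'[axis]

# A view ray hitting a floor (normal pointing up) bounces upward,
# so the engine samples the '+y' face of the probe's cubemap:
r = reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0))
print(r, dominant_cubemap_face(r))  # -> (0.0, 1.0, 0.0) +y
```

The cubemap lookup costs a texture fetch instead of a full ray traversal, which is why it runs comfortably on a single consumer GPU; the trade-off is that the reflection is only correct from the probe's capture point.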

To somehow prove this point, I used some Star Wars models I had lying around and tried to make a similar scene. This small demo, made in Unity, was really just a quick test (it was assembled in 30 minutes). It renders at 60 fps in HD resolution on an Alienware laptop with a single GTX 1080. Unfortunately, I didn't have any rigged models, so I was unable to animate them.

And here you can see it working in realtime:

This doesn't mean we don't need raytracing at all. There are many examples and use cases where it can and will be quite important: automotive and product design, and VR. Unfortunately, we will have to wait until raytracing becomes feasible in VR. Raytracing hardware could also speed up other processes.
Most of the concerns expressed here relate mainly to this specific tech demo.

EDIT: After some criticism of this video, I understood that it probably doesn't demonstrate the "correctness" of the reflections well. So I've made another one that is more dynamic.

One thought on "Do we really need realtime Raytrace?"

  1. The main advantage of RTX is real reflections (world and local), as opposed to SSR, which has to rely on reflection probes for realtime things...
    also real refractions and real GI :]

