Recently I took on a personal challenge of remaking scenes from Blade Runner 2049 in Unity. For this task I wanted every shot to look as cinematic as possible. Unity has some really nice tools for this, but they are built for realtime graphics and for performance – and anyone who works in video production can tell at a glance that such scenes come from a hardware renderer, mostly because of aliasing issues (temporal and spatial) and the behaviour of depth of field and motion blur. So I had the idea of translating real camera behaviour into an asset that simulates a physical camera – and then renders Unity scenes into a video file.
This tech extends Unity's rendering capabilities and works in combination with the Post Processing Stack, augmenting its effects. It is completely integrated into Unity and doesn't require any change to your workflow or the use of custom shaders. This means you can take your scene, render it with Deckard Render, and it will look much better than Unity's realtime output.
-Real soft shadows. Any Unity light can behave like a soft or area light, resulting in smooth shadows.
-Temporal and spatial antialiasing for perfect filmic motion. This also means no more shimmering in highlights and bloom, and no more moiré effects in your footage.
-Physical depth of field that works with transparent objects and particles. It gives correct DOF even on materials that use parallax, displacement, or fake-interior shaders, and it supports anamorphic depth of field behaviour. This kind of depth of field works with any transparent, refractive, or reflective object. For example, it is a known optical effect that with a shallow DOF a glossy object can be in focus while its reflection is out of focus; on the contrary, if a glossy object is out of focus, the reflections on its surface can be in focus. Deckard Render handles all of those cases.
-Motion blur that works on an interframe basis (handling circular or curved motion correctly). It also behaves correctly on transparent objects like particle systems, glass refractions, and specular highlights.
-It smooths out any Unity Post Processing effect, making its appearance seem more real.
-Deckard Render is compatible with all standard and custom Unity shaders.
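The features above share one underlying idea: soft shadows, depth of field, and motion blur can all emerge from averaging many slightly different renders of the same frame. Here is a toy sketch of that principle (my own illustration, not Deckard Render's actual code) showing how accumulating jittered sub-frames of a moving bright bar on a 1-D "image" produces motion blur and antialiasing purely through averaging:

```python
import random

def render_subframe(t, jitter, width=16):
    # Toy "scene": a bar of half-width 1.5 whose center moves from x=2 to x=12
    # over the shutter interval, parameterised by t in [0, 1].
    center = 2.0 + 10.0 * t
    row = []
    for px in range(width):
        # Jittered sample position inside the pixel -> spatial antialiasing.
        x = px + jitter()
        row.append(1.0 if abs(x - center) <= 1.5 else 0.0)
    return row

def render_frame(samples=256, width=16, seed=1):
    rng = random.Random(seed)
    acc = [0.0] * width
    for _ in range(samples):
        t = rng.random()        # random shutter time  -> motion blur
        sub = render_subframe(t, rng.random, width)
        acc = [a + s for a, s in zip(acc, sub)]
    # Averaging the sub-frames yields partial coverage per pixel:
    # a smooth smear instead of a hard-edged bar.
    return [a / samples for a in acc]

frame = render_frame()
```

Pixels in the middle of the sweep end up with intermediate grey values proportional to how long the bar covered them, which is exactly what a physical shutter does.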
Deckard Render isn’t a real ray-tracer, but it works on the principles of the Nyquist–Shannon sampling theorem, sampling pixels in both space and time. Most antialiasing is done by super-sampling the frame buffer (rendering at double the screen resolution and then resizing down to screen resolution). That process requires a lot of video RAM and still doesn’t produce perfect antialiasing. Deckard Render goes further and uses various techniques to simulate real optical physics. A side effect of this procedure is that even rendering at a smaller resolution produces a more natural-looking image. In this respect it behaves much like an analog camera system.
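To make the contrast with fixed super-sampling concrete, here is a minimal Python sketch (my illustration, not part of the asset) of how accumulating jittered samples refines a pixel: estimating how much of a pixel a diagonal edge covers, one pass gives a hard aliased value, while many accumulated passes converge to the true coverage:

```python
import random

def pixel_coverage(n_samples, seed=0):
    # Fraction of a unit pixel covered by the half-plane y < x.
    # The true analytic coverage is exactly 0.5; jittered sampling
    # converges toward it as more passes are accumulated.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if y < x:
            hits += 1
    return hits / n_samples

one_pass = pixel_coverage(1)        # a single sample: 0.0 or 1.0, a hard aliased edge
accumulated = pixel_coverage(4096)  # many passes: a smooth, near-exact grey value
```

A fixed 2x super-sample is just this with four samples per pixel, forever; accumulation keeps adding samples, so the edge keeps getting smoother without needing a larger frame buffer.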
Typical usages of this asset:
-Rendering high quality presentations of 3D environments, like demos or showreels
-Using Unity in place of other offline rendering systems
-Pre-visualisation or production for motion pictures and TV (including testing real depth of field effects)
-Testing what will be possible with realtime graphics in the future
-Previewing scenes and dialing in the correct camera look.
This tool can export JPG, PNG, or EXR sequences, or an MP4 video file. If you are a user of VR Panorama (my first asset on the Asset Store), you will feel right at home with this tool.
Deckard Render still relies on some post processing techniques, like bloom, AO, reflection probes, and screen-space reflections. But due to its nature it mostly corrects issues with those effects, enhancing their overall look and smoothness. Effects that require temporal sampling really begin to shine (like volumetric light rays or realtime GI).