What is R.ITA?

R.ITA (Rendering InTerActive) is a technology demonstration / case study of what I think a production renderer of the next decade (the 201x years) should look like. You could say it is the first generation of a so-called "smart" renderer: a fusion of final-frame rendering, post-processing and image-manipulation software, with a unique toolset to carry 3D artists into the next decade. See the diagram and description below for a more detailed comparison and discussion of the smart renderer.

What is R.ITA not?

R.ITA is not and will not be a product, neither commercial nor free. The reason is simply the limited amount of resources that can be spent on it. R.ITA as it stands at the moment is a spare-time project and only gets the amount of attention that is bearable without interfering too much with private life.
However, if anyone is interested in pouring more resources into this project to eventually turn it into a product, or is interested in the technology itself, feel free to contact me.

Smart Renderer vs. Standard Renderer

Let's face it: your standard production renderer is pretty dumb, and the standard workflow more or less looks like this (slightly simplified):

This means you have your 3D scene containing geometry objects and light sources, the materials/shaders, the textures associated with those materials, light sources or the camera (environment), the camera itself and some specific render settings. From these the render engine will, in the end, hopefully produce a satisfactory final image. This is pretty much a one-way system: every time any of the inputs changes, the render engine has to start from the beginning again and produce a new final image, which, depending on the complexity of the combined inputs, can take seconds, minutes, hours or even days.
Not only does this feel like a massive waste of computational resources, since a lot of the time the inputs will not change that dramatically, it also means a lot of unnecessary, unproductive waiting time.
Now of course some will argue: make the renderer faster or provide faster feedback. And yes, a lot of people seem to be working on this, and many final-frame renderers have started to go down the progressive-refinement road to give the user faster feedback. But this doesn't make the renderer any smarter at all, or okay, maybe a little bit, depending on how the refinement strategies are implemented. Having feedback early on helps to find out whether the inputs and settings are good, but often it is still necessary to wait until a decent quality has been reached or the renderer has produced the final image, and again any change in the inputs will start the process from the beginning. Even worse, the end image might not actually be what the client initially wanted, or the client suddenly decides otherwise, for example that the car in the picture should be green instead of red, or a darker red. There go hours of rendering down the digital drain.
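To make the one-way nature of this loop concrete, below is a minimal C++ sketch. All names (Scene, renderFullImage, carPaint and so on) are made up for illustration; they are not R.ITA or mental ray API.

    #include <cstdio>

    // Hypothetical placeholder types; a real scene description is of course far richer.
    struct Material { float r = 0.8f, g = 0.1f, b = 0.1f; };
    struct Scene    { Material carPaint; /* geometry, lights, textures, camera, settings ... */ };
    struct Image    { /* final pixel data */ };

    // The classic one-way renderer: everything in, one image out.
    // Depending on the scene this single call can take seconds, minutes, hours or days.
    Image renderFullImage(const Scene& scene) {
        (void)scene;
        std::puts("rendering the whole frame from scratch ...");
        return Image{};
    }

    int main() {
        Scene scene;
        Image frame = renderFullImage(scene);   // full cost paid here

        // The client decides the car should be a darker red: only one material
        // parameter changes, yet the whole render starts again from the beginning.
        scene.carPaint = Material{0.35f, 0.0f, 0.05f};
        frame = renderFullImage(scene);         // full cost paid again
        (void)frame;
    }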
In comparison, the workflow of R.ITA, or what I define as a smart renderer, would instead look like this:

The main difference is that the render engine no longer produces the final image but instead fills up a sort of visibility database that stores all the visibility information for each pixel in the image. A visualizer module can then take the material, texture and other scene information, such as light source positions, and compute the final image. The really interesting bit is that both processes can run completely independently of each other, potentially even on different hardware or machines. The visualizer can query the visibility information and produce the final image while the render engine keeps filling up the database in the meantime. Even better, every change the user makes that does not impact the visibility dramatically can be processed immediately, without starting from scratch. This reduces a huge chunk of the standard rendering process to a data storage and database problem and basically trades processing time for memory and storage.
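To make the split a little more concrete, here is a rough C++ sketch of what one entry of such a per-pixel visibility database and the re-shading step in the visualizer could look like. The record layout, the names and the toy Lambert shading are purely my illustration, not R.ITA's actual data format.

    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // One entry of the (hypothetical) visibility database: everything the expensive
    // render pass found out about what is visible through a pixel, no shading applied yet.
    struct VisibilityRecord {
        float px, py, pz;      // world-space hit position
        float nx, ny, nz;      // shading normal at the hit
        std::uint32_t matId;   // which material covers this pixel
        float lightVis;        // fraction of the key light that is visible (0 = shadow, 1 = lit)
    };

    struct Material { float r, g, b; };            // illustrative: just a diffuse colour
    struct Light    { float x, y, z, intensity; };
    struct Pixel    { float r, g, b; };

    // The visualizer: re-shades the stored visibility with the *current* materials and
    // lights. This pass is cheap and can be repeated on every user edit while the render
    // engine keeps filling or refining the database somewhere else.
    std::vector<Pixel> shade(const std::vector<VisibilityRecord>& db,
                             const std::vector<Material>& materials,
                             const Light& light)
    {
        std::vector<Pixel> image(db.size());
        for (std::size_t i = 0; i < db.size(); ++i) {
            const VisibilityRecord& v = db[i];
            // toy Lambert term from the stored normal and the current light position
            float lx = light.x - v.px, ly = light.y - v.py, lz = light.z - v.pz;
            float len = std::sqrt(lx * lx + ly * ly + lz * lz);
            float ndotl = (v.nx * lx + v.ny * ly + v.nz * lz) / (len > 0.0f ? len : 1.0f);
            if (ndotl < 0.0f) ndotl = 0.0f;
            const Material& m = materials[v.matId];
            float k = ndotl * v.lightVis * light.intensity;
            image[i] = { m.r * k, m.g * k, m.b * k };
        }
        return image;
    }

Changing the car paint in this picture only means replacing one entry in materials and calling shade again; the stored visibility stays valid as long as nothing in the scene actually moves.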
The demo movies page shows a glimpse of the sort of processes and interactions that become possible in such a system.

What technology does R.ITA use?

R.ITA uses certain APIs and rendering modules but the general idea or technology itself is not restricted to any of them.

For the rendering engine, R.ITA at the moment uses mental ray and extracts all the information via a custom-written shader that can be used in any application that provides mental ray and allows custom shaders to be plugged in. As mentioned before, any production renderer should be capable of doing this; however, mental ray has a very powerful and flexible shading API and allows this functionality to be exploited without the need to write or change anything in mental ray itself.
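To give a rough idea of what such an extraction shader can look like, here is a stripped-down mental ray material shader skeleton that writes the intersection point and shading normal into user frame buffers instead of computing a beauty colour. The parameter names, buffer indices and the choice of what to extract are illustrative; the real R.ITA shader extracts considerably more.

    #include "shader.h"   // mental ray shader interface

    struct rita_extract_params {
        miInteger fb_position;   // index of the user frame buffer for hit positions
        miInteger fb_normal;     // index of the user frame buffer for normals
    };

    extern "C" {

    DLLEXPORT int rita_extract_version(void) { return 1; }

    // Called for every eye-ray hit: instead of shading, store the visibility
    // information so an external visualizer can compute the final colour later.
    DLLEXPORT miBoolean rita_extract(
        miColor *result, miState *state, struct rita_extract_params *params)
    {
        int fb_pos  = *mi_eval_integer(&params->fb_position);
        int fb_norm = *mi_eval_integer(&params->fb_normal);

        miColor pos  = { state->point.x,  state->point.y,  state->point.z,  1.0f };
        miColor norm = { state->normal.x, state->normal.y, state->normal.z, 0.0f };

        mi_fb_put(state, fb_pos,  &pos);    // written into user frame buffers that are
        mi_fb_put(state, fb_norm, &norm);   // later exported/streamed into the database

        // the beauty output itself does not matter here
        result->r = result->g = result->b = 0.0f;
        result->a = 1.0f;
        return miTRUE;
    }

    } // extern "C"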

The visualizer module is a custom-written application using Direct3D 10.x that takes full advantage of modern GPU shading capabilities. The processing, data storage and access are arranged so they fit GPU processing via the Direct3D API, but the module could just as easily be written in OpenGL 3.x, OpenCL or plain C++ and run on any kind of hardware. For maximum speed and interactive feedback a highly parallel architecture is recommended, though.
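As a small illustration of what "arranging the data to fit GPU processing" can mean, the Direct3D 10 snippet below creates two float4 textures that could hold the per-pixel hit positions and normals, ready to be sampled by a pixel shader that performs the actual re-shading. The layout is again my assumption, not R.ITA's actual one.

    #include <d3d10.h>

    // One RGBA32F texture per per-pixel attribute (here: position and normal).
    // A pixel shader can sample these and evaluate the current materials and lights
    // on the GPU, which is essentially the visualizer's re-shading pass.
    HRESULT CreateVisibilityTextures(ID3D10Device* device, UINT width, UINT height,
                                     ID3D10Texture2D** position, ID3D10Texture2D** normal)
    {
        D3D10_TEXTURE2D_DESC desc = {};
        desc.Width            = width;
        desc.Height           = height;
        desc.MipLevels        = 1;
        desc.ArraySize        = 1;
        desc.Format           = DXGI_FORMAT_R32G32B32A32_FLOAT; // xyz plus one spare channel
        desc.SampleDesc.Count = 1;
        desc.Usage            = D3D10_USAGE_DYNAMIC;            // updated as the database fills up
        desc.BindFlags        = D3D10_BIND_SHADER_RESOURCE;
        desc.CPUAccessFlags   = D3D10_CPU_ACCESS_WRITE;

        HRESULT hr = device->CreateTexture2D(&desc, NULL, position);
        if (FAILED(hr)) return hr;
        return device->CreateTexture2D(&desc, NULL, normal);
    }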

Want to know more, have some comments?

For any questions, comments, feedback and inquiries, please email at

Please refrain from comments on the artwork, since no artists were used in the process :)