Once the raw point clouds have been combined and fully processed, they need to be rendered from the point of view of the virtual camera. There are several strategies for this, most notably splatting, usually combined with some form of surface-searching algorithm. This task is essentially open ended, as quality can always be improved.
Challenges:
* Achieving sharp, smooth object edges.
* Maintaining colour accuracy without excessive blurring or noise.
* Smoothing noise without losing sharp surface detail.
* Estimating a surface from noisy point clouds.
At present, CUDA is used directly to take the corrected point cloud from each camera and transform all of its points onto the screen of the virtual camera. An optional second step then splats those points, expanding each one over the surface it represents, to fill the gaps between points and produce a complete image. This is done in two passes: the first generates a depth map and the second applies colour to the surface.
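The following is a minimal sketch of these two passes, assuming a simple pinhole model for the virtual camera and omitting the optional splat radius that would expand each point over a small pixel window. The `Point`, `VirtualCam` and intrinsic names (`fx`, `fy`, `cx`, `cy`) are illustrative placeholders rather than the project's actual types.

```cuda
// Sketch of two-pass point rendering: pass 1 builds a depth map,
// pass 2 colours only the points that survive the depth test.
#include <cuda_runtime.h>

struct Point { float3 pos; uchar3 colour; };

struct VirtualCam {
    float R[9];               // world-to-camera rotation (row-major)
    float t[3];               // world-to-camera translation
    float fx, fy, cx, cy;     // pinhole intrinsics
    int width, height;
};

__device__ float3 transform(const VirtualCam& c, float3 p) {
    return make_float3(
        c.R[0]*p.x + c.R[1]*p.y + c.R[2]*p.z + c.t[0],
        c.R[3]*p.x + c.R[4]*p.y + c.R[5]*p.z + c.t[1],
        c.R[6]*p.x + c.R[7]*p.y + c.R[8]*p.z + c.t[2]);
}

// Pass 1: project every point and keep the nearest depth per pixel.
// Depth is stored as fixed-point unsigned ints so atomicMin resolves races.
__global__ void depthPass(const Point* pts, int n, VirtualCam cam,
                          unsigned int* depth /* init to 0xFFFFFFFF */) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float3 p = transform(cam, pts[i].pos);
    if (p.z <= 0.f) return;                        // behind the virtual camera
    int u = __float2int_rn(cam.fx * p.x / p.z + cam.cx);
    int v = __float2int_rn(cam.fy * p.y / p.z + cam.cy);
    if (u < 0 || u >= cam.width || v < 0 || v >= cam.height) return;
    unsigned int d = (unsigned int)(p.z * 1000.f); // millimetre fixed point
    atomicMin(&depth[v * cam.width + u], d);
}

// Pass 2: apply colour only where a point's depth matches the depth map
// (within a tolerance), so occluded points do not bleed through.
__global__ void colourPass(const Point* pts, int n, VirtualCam cam,
                           const unsigned int* depth, uchar3* image) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float3 p = transform(cam, pts[i].pos);
    if (p.z <= 0.f) return;
    int u = __float2int_rn(cam.fx * p.x / p.z + cam.cx);
    int v = __float2int_rn(cam.fy * p.y / p.z + cam.cy);
    if (u < 0 || u >= cam.width || v < 0 || v >= cam.height) return;
    unsigned int d = (unsigned int)(p.z * 1000.f);
    int idx = v * cam.width + u;
    if (d <= depth[idx] + 10) image[idx] = pts[i].colour; // 1 cm tolerance
}
```

Storing depth as fixed-point unsigned integers is one way to let `atomicMin` resolve the races that occur when points from different cameras land on the same pixel.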
Another possibility is to use OpenGL instead of CUDA, which might boost performance but would reduce flexibility. There is already code available for point splatting using OpenGL.
Finally, whilst the reconstruction machine could perform this rendering (and currently does), it is also possible for the client machines observing these virtual views to do the rendering themselves, with the point cloud data sent over the network. This strategy would allow a degree of local movement around the scene independent of the reconstruction machine, mitigating some of the latency and allowing more rapid response to user input that moves the virtual camera.
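As a rough illustration of what such a stream might carry, the packed per-frame format below is a hypothetical example only; the field names, quantisation and sizes are assumptions, not the project's actual protocol.

```cuda
// Hypothetical wire format for streaming a fused point cloud to client
// renderers. A frame is a FrameHeader followed by pointCount WirePoint
// records; at 9 bytes per point, a million points is roughly 9 MB per
// frame, which is the bandwidth trade-off this strategy accepts.
#include <cstdint>

#pragma pack(push, 1)
struct WirePoint {
    int16_t x_mm, y_mm, z_mm;   // position quantised to millimetres
    uint8_t r, g, b;            // colour
};

struct FrameHeader {
    uint32_t frameId;           // monotonically increasing frame counter
    uint32_t pointCount;        // number of WirePoint records that follow
};
#pragma pack(pop)
```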