Use variance and constraints to merge and correct depth maps
When merging point clouds from different cameras, it may be possible to flag volumes (voxels) as definitely empty and to force a correction whenever a point is subsequently added to a definitely-empty location. The corrected point should move along the depth ray of its original source camera until it reaches a potential location. I think this would help sharpen the resulting object boundaries and smooth surfaces consistently across all cameras.
Currently I am not sure which data structure would be best for this (voxel hashing, below?)
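The flag-and-correct idea above could be sketched as follows, using a plain hash map keyed by quantised voxel coordinates as a stand-in for whatever structure is eventually chosen. All names (`VOXEL_SIZE`, `carve_empty`, `add_point`) and the step limit are hypothetical:

```python
import numpy as np

VOXEL_SIZE = 0.01  # metres; hypothetical resolution


def voxel_key(p):
    """Quantise a 3D point to integer voxel coordinates."""
    return tuple(np.floor(np.asarray(p) / VOXEL_SIZE).astype(int))


def carve_empty(grid, keys):
    """Flag voxels that rays have been seen through as definitely empty."""
    for k in keys:
        grid[k] = "empty"


def add_point(grid, point, cam_centre, max_steps=50):
    """Insert a point into the grid; if its voxel is flagged as
    definitely empty, push it along its source camera's depth ray
    (away from the camera) until a non-empty voxel is reached."""
    ray = point - cam_centre
    ray = ray / np.linalg.norm(ray)   # unit direction, camera -> point
    p = point.astype(float).copy()
    for _ in range(max_steps):
        key = voxel_key(p)
        if grid.get(key) != "empty":  # candidate location found
            grid.setdefault(key, []).append(p)
            return p
        p = p + ray * VOXEL_SIZE      # step one voxel deeper
    return None                       # no free voxel within range
```

A point that lands in a carved voxel is therefore only ever moved deeper along its own viewing ray, never sideways, which is what should tighten boundaries rather than smear them.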
See: Kutulakos, K. and Seitz, S. (2000), "A Theory of Shape by Space Carving". I would optimise this process using our disparity data, though, so it would not be pure carving as described in the paper.
And see a real-time GPU algorithm for essentially the above concept: Nießner et al., "Real-time 3D Reconstruction at Scale using Voxel Hashing". Code here: https://github.com/niessner/VoxelHashing
Another option is given in "Merging overlapping depth maps into a nonredundant point cloud"; source code for that method is here: https://github.com/tomikyos/kinect-merge