Currently, each machine on our local network streams individual synchronised, timestamped frames over our gigabit network using only JPEG and PNG compression. The compression uses all CPU cores by dividing each image into chunks and compressing them independently, bringing full-frame compression down to approximately 10 ms; our budget allows up to 30 ms for compression in total. Without the chunking, compression takes close to 100 ms and is no longer real-time.
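As a rough illustration of the chunked scheme (not our actual pipeline), the sketch below splits a frame buffer into slices and compresses them in parallel. zlib stands in here for the real JPEG/PNG encoders, and the chunk count is arbitrary; since zlib releases the GIL while compressing, a thread pool is enough to occupy multiple cores.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunk(chunk: bytes) -> bytes:
    # Chunks are compressed independently, so they can run in parallel.
    return zlib.compress(chunk, level=6)

def compress_frame(frame: bytes, n_chunks: int = 8) -> list[bytes]:
    # Split the frame into roughly equal slices, one per worker.
    size = (len(frame) + n_chunks - 1) // n_chunks
    chunks = [frame[i:i + size] for i in range(0, len(frame), size)]
    # zlib releases the GIL during compression, so threads scale across cores.
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        return list(pool.map(compress_chunk, chunks))

def decompress_frame(parts: list[bytes]) -> bytes:
    # Chunks decompress independently too; concatenation restores the frame.
    return b"".join(zlib.decompress(p) for p in parts)
```

The trade-off is slightly worse compression ratio (each chunk is compressed without knowledge of the others) in exchange for near-linear speed-up across cores.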
At HD resolution and above, the data rates become prohibitive, particularly when considering streaming over the external network for remote use: far stronger real-time compression is required to go from 100-150 Mbps to closer to 20 Mbps. The only realistic option is to use hardware video encoders such as those in our new NVIDIA GPUs, although using them while maintaining our accuracy and low latency is challenging. Depth data also cannot be naively compressed with colour video compression techniques, since those codecs make different assumptions about which information may be lost during compression. Special algorithms will therefore be required to transform the depth data into a form suitable for hardware video encoding, or perhaps a hybrid strategy using a custom CPU-based element for the depth encoding. Publications do exist on this subject. One hardware technology that may be of relevance is NVIDIA's Optical Flow.
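To illustrate why depth needs special handling, the sketch below shows the naive transform: splitting each 16-bit depth sample into high and low bytes carried as two 8-bit planes. The round trip is lossless, but a lossy video codec that perturbs a high byte by even 1 introduces a 256-unit depth error, which is why published schemes favour smoother bit mappings. The function names here are illustrative, not from any library.

```python
def split_depth(depth: list[int]) -> tuple[list[int], list[int]]:
    # Naive packing: carry the high byte and low byte as two 8-bit planes.
    high = [d >> 8 for d in depth]
    low = [d & 0xFF for d in depth]
    return high, low

def merge_depth(high: list[int], low: list[int]) -> list[int]:
    # Exact inverse -- but only while no lossy codec has touched the planes.
    return [(h << 8) | l for h, l in zip(high, low)]
```

A colour codec treats the low-byte plane as high-frequency noise and quantises it aggressively, destroying depth precision; this is the mismatch of assumptions referred to above.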
Software libraries to consider using: ffmpeg and libav, or the NVIDIA encoder API (NVENC) directly, which may be necessary if we are to have sufficient control over latency and synchronisation.
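As a starting point for experiments, an ffmpeg invocation along these lines would exercise NVENC with low-latency settings. This is a sketch, not a tested configuration: the option names (`-preset p1`, `-tune ull`) depend on the ffmpeg build and NVENC SDK version, and the resolution, bitrate and destination address are placeholders.

```shell
# Raw frames on stdin -> hardware H.264 via NVENC -> MPEG-TS over UDP.
# Available flags vary by build; check `ffmpeg -h encoder=h264_nvenc`.
ffmpeg -f rawvideo -pix_fmt yuv420p -s 1920x1080 -r 30 -i - \
  -c:v h264_nvenc -preset p1 -tune ull -rc cbr -b:v 20M -g 30 -delay 0 \
  -f mpegts udp://127.0.0.1:5000
```

Going through ffmpeg is the quickest route to a prototype, but its internal buffering makes per-frame latency and sync harder to control than calling the NVENC API directly.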