The data rates become prohibitive once the images reach HD resolution or higher, especially when streaming over the external network for remote use, so far greater real-time compression is required to bring the current 100-150 Mbps down to closer to 20 Mbps. The only likely option is to use the hardware video encoders found in our new NVIDIA GPUs, although using them whilst maintaining our accuracy and low latency is challenging. Depth data also cannot be naively compressed with colour video compression techniques, since those make different assumptions about which data can be lossy. Special algorithms will therefore be required to transform the depth data into a form suitable for hardware video encoding, or perhaps a hybrid strategy with a custom CPU-based element for the depth encoding. Publications exist on this subject. One hardware technology that may be of relevance is NVIDIA's Optical Flow.
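
To illustrate the kind of transform involved, below is a minimal sketch (an assumption for illustration, not our implementation) that splits 16-bit depth samples into two 8-bit planes which could be handed to an 8-bit video encoder. Naive bit-splitting like this is fragile under lossy coding of the low plane; the published schemes referred to above use more elaborate remappings that tolerate quantisation better.

```cpp
// Sketch: pack 16-bit depth into two 8-bit planes so it fits the input
// formats expected by 8-bit hardware video encoders, plus the inverse
// unpacking. The low (least significant) plane is very sensitive to lossy
// compression, which is why more robust remappings are needed in practice.
#include <cstdint>
#include <cstdio>
#include <vector>

void packDepth(const std::vector<uint16_t>& depth,
               std::vector<uint8_t>& high, std::vector<uint8_t>& low) {
    high.resize(depth.size());
    low.resize(depth.size());
    for (size_t i = 0; i < depth.size(); ++i) {
        high[i] = static_cast<uint8_t>(depth[i] >> 8);    // most significant byte
        low[i]  = static_cast<uint8_t>(depth[i] & 0xFF);  // least significant byte
    }
}

std::vector<uint16_t> unpackDepth(const std::vector<uint8_t>& high,
                                  const std::vector<uint8_t>& low) {
    std::vector<uint16_t> depth(high.size());
    for (size_t i = 0; i < depth.size(); ++i)
        depth[i] = static_cast<uint16_t>((high[i] << 8) | low[i]);
    return depth;
}

int main() {
    std::vector<uint16_t> depth = {0, 512, 4096, 65535};  // e.g. depth in millimetres
    std::vector<uint8_t> high, low;
    packDepth(depth, high, low);
    std::vector<uint16_t> out = unpackDepth(high, low);
    std::printf("lossless round trip: %s\n", depth == out ? "ok" : "failed");
    return 0;
}
```
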
Software libraries to consider: FFmpeg and libav, or the NVIDIA encoder API used directly, which may be necessary if we are to have sufficient control over latency and synchronisation.
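
As a starting point, here is a hedged sketch of opening FFmpeg's `hevc_nvenc` encoder through libavcodec with low-latency oriented settings. The NVENC private option names ("preset", "tune", "delay") vary between FFmpeg versions, so treat them as assumptions to verify against `ffmpeg -h encoder=hevc_nvenc` on the installed build.

```cpp
// Sketch only: open the hevc_nvenc encoder (NVIDIA hardware HEVC) via
// libavcodec and configure it for low latency. The av_opt_set calls on
// priv_data are NVENC-specific and version dependent; check them against
// the actual FFmpeg build before relying on them.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
}
#include <cstdio>

int main() {
    const AVCodec* codec = avcodec_find_encoder_by_name("hevc_nvenc");
    if (!codec) { std::fprintf(stderr, "hevc_nvenc not available\n"); return 1; }

    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    ctx->width        = 1920;                // HD colour stream
    ctx->height       = 1080;
    ctx->pix_fmt      = AV_PIX_FMT_NV12;     // frames uploaded from system memory
    ctx->time_base    = AVRational{1, 30};
    ctx->framerate    = AVRational{30, 1};
    ctx->bit_rate     = 20000000;            // ~20 Mbps target from the notes above
    ctx->gop_size     = 30;
    ctx->max_b_frames = 0;                   // B-frames add latency

    // NVENC private options -- names are assumptions, version dependent.
    av_opt_set(ctx->priv_data, "preset", "p1", 0);   // fastest preset
    av_opt_set(ctx->priv_data, "tune", "ull", 0);    // ultra-low latency, if supported
    av_opt_set_int(ctx->priv_data, "delay", 0, 0);   // avoid output buffering delay

    if (avcodec_open2(ctx, codec, nullptr) < 0) {
        std::fprintf(stderr, "failed to open encoder\n");
        return 1;
    }
    std::printf("hevc_nvenc opened: %dx%d @ %lld bps\n",
                ctx->width, ctx->height, static_cast<long long>(ctx->bit_rate));

    // Frames would then be pushed with avcodec_send_frame() and packets
    // drained with avcodec_receive_packet().
    avcodec_free_context(&ctx);
    return 0;
}
```

Driving the NVIDIA Video Codec SDK (NvEncodeAPI) directly, rather than through the FFmpeg wrapper, would give finer per-frame control over latency and synchronisation at the cost of more integration work.
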
**Note:** This has been implemented to some extent; we can now encode and decode using NVIDIA hardware with the HEVC or H.264 codecs. Improvements are still possible.