|
|
* **O4.3:** Use the platform for non-remote data capture for participant research, providing at least some analysis or data extraction tools to demonstrate potential towards S1 and S4.
|
|
|
|
|
|
# 2. OVERALL DESCRIPTION
|
|
|
The prototype will be a new platform that enables the capture, mixing, filtering, analysis, recording and streaming of teleimmersive 4D audio-visual data in a real-time, modular and flexible manner so that individual data sources or algorithms can be replaced or evolved in the future. The key components are therefore:
|
|
|
|
|
|
* **C1:** *Capture* - Scene reconstruction from fixed lab spaces as a 4D-AV data source
|
|
|
* **C2:** *Mixing* - 4D-AV mixing and filtering system, from any number of arbitrary data sources
|
|
|
* **C3:** *Analysis* - 4D-AV Analysis system to extract and embed higher-level data about a scene
|
|
|
* **C4:** *Recording* - 4D-AV recording and playback as a generic 4D-AV data source
|
|
|
* **C5:** *Streaming* - Lossless and lossy compression of 4D-AV for storage and streaming
|
|
|
* **C6:** *Presentation* - Interactive visualisation front-end(s) from a 4D-AV stream
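The modular, replaceable nature of components C1–C6 could be captured by a common stage interface. The following is a minimal sketch under assumed names (`AVFrame`, `Component`, `run_pipeline` are all hypothetical, not part of any existing codebase); the point is only that any stage can be swapped without touching the others:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class AVFrame:
    """One timestamped unit of 4D-AV data (hypothetical representation)."""
    timestamp: float
    payload: Any                                  # e.g. point cloud, mesh, or audio buffer
    metadata: Dict[str, Any] = field(default_factory=dict)


class Component(ABC):
    """Common interface so any stage (C1-C6) can be replaced independently."""

    @abstractmethod
    def process(self, frame: AVFrame) -> AVFrame:
        ...


class Passthrough(Component):
    """Trivial stage, used here only to show how stages chain together."""

    def process(self, frame: AVFrame) -> AVFrame:
        return frame


def run_pipeline(stages: List[Component], frame: AVFrame) -> AVFrame:
    """Push one frame through an ordered chain of stages (capture -> ... -> presentation)."""
    for stage in stages:
        frame = stage.process(frame)
    return frame
```

A concrete pipeline would then be an ordered list such as `[capture, mixer, analysis, encoder, viewer]`, each element honouring the same `process` contract.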
|
|
|
|
|
|
Additional virtual data sources may also be considered to allow mixing of real and virtual content into 4D-AV scenes, suggesting a requirement that the representation used be independent of, or flexible with respect to, the source. The analysis outputs are also to be attached to or embedded within the 4D-AV stream, perhaps as an additional data channel; for example, semantic information about objects such as classification or motion data. This analysis information may also be used for predictive purposes, to mitigate latency and to improve both compression and initial source capture quality. To be used in this way, the data must flow not only forward with the stream but also back to previous processing steps.
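The combination of a forward metadata channel and backward data flow could be sketched as follows. This is an illustrative assumption, not a committed design: a shared feedback registry lets a downstream analysis stage (C3) post hints that an upstream capture or encoding stage (C1/C5) reads on subsequent frames:

```python
from typing import Any, Dict

# Feedback registry: downstream stages (e.g. analysis, C3) post hints that
# upstream stages (e.g. capture, C1, or compression, C5) read on later frames.
Feedback = Dict[str, Any]


def analysis_stage(frame: Dict[str, Any], feedback: Feedback) -> Dict[str, Any]:
    """Attach semantic metadata to the stream and post a motion hint upstream."""
    # Hypothetical result: one classified object with an estimated velocity.
    frame.setdefault("metadata", {})["objects"] = [
        {"label": "person", "velocity": (0.1, 0.0, 0.0)}
    ]
    # Backward flow: a predictive hint the encoder/capture can use next frame.
    feedback["predicted_motion"] = (0.1, 0.0, 0.0)
    return frame


def capture_stage(feedback: Feedback) -> Dict[str, Any]:
    """Use the previous frame's feedback (if any) to bias the next capture."""
    hint = feedback.get("predicted_motion")  # None on the very first frame
    return {"payload": "point-cloud", "capture_hint": hint}
```

The semantic metadata travels forward embedded in the frame, while the prediction travels backward through the registry, matching the two directions of flow described above.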
|
|
|
|
|
|
Overall there must be a great deal of user control over the configuration of the platform to enable custom mixing, filtering, recording and presentation. However, the use cases presented in the next section can be used to limit the scope of flexibility.
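As a sketch of what that user-facing configuration might look like, a declarative structure mapping onto C1–C6 is one option. All keys and values below are illustrative assumptions only, with the scope of options limited by the use cases:

```python
# Hypothetical user-facing configuration selecting sources, filters and sinks.
# Every name here is illustrative; none is a committed interface.
pipeline_config = {
    "sources": [
        {"type": "lab_capture", "id": "labA"},            # C1: fixed lab space
        {"type": "recording", "path": "session01.4dav"},  # C4: playback as a source
    ],
    "mixer": {"layout": "side_by_side"},                  # C2: mixing
    "filters": ["background_removal"],                    # C2: user-chosen filtering
    "analysis": {"enabled": True, "embed_metadata": True},   # C3
    "streaming": {"codec": "lossy", "bitrate_kbps": 8000},   # C5
    "presentation": {"frontend": "vr_viewer"},            # C6
}
```

Because recordings (C4) appear as just another source, the same configuration mechanism covers both live and replayed sessions.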
|
|
|
|
|
|
What is to be captured, the kinds of mixing and filtering, and the expectations on interaction and visualisation all depend upon the target audiences initially identified by the five usage scenarios in Section 1.
|
|
|
|
|
|
## 2.2 DEPENDENCIES AND ASSUMPTIONS
|
|
|
We will rely, at least initially, on OpenCV and the Point Cloud Library (PCL). It is also assumed that we will not use active depth sensors, although they should remain available as a backup plan.
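One way this assumption might be made explicit at start-up is a backend check that prefers passive multi-camera capture via OpenCV but records that the active-sensor backup would be needed if the dependency is absent. The function name and backend labels below are hypothetical:

```python
def select_depth_backend() -> str:
    """Prefer passive multi-view capture (OpenCV); fall back to active sensors.

    Returns a label naming the backend the platform would configure.
    """
    try:
        import cv2  # noqa: F401  (OpenCV: passive stereo / multi-view capture)
        return "passive_multiview"
    except ImportError:
        # Backup plan: an active depth sensor pipeline would be selected here.
        return "active_sensor_fallback"
```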
|
|
|
|
|
|
# 3. SYSTEM FEATURES AND REQUIREMENTS
|
|
|
|
|
|
## 3.1 FUNCTIONAL REQUIREMENTS
|
|
|
|
|
|
## 3.2 EXTERNAL INTERFACE REQUIREMENTS
|
|
|
|
|
|
## 3.3 NON-FUNCTIONAL REQUIREMENTS
|
|
|
|
|
|
|
|
|
|