The prototype will be a new platform that enables the capture, mixing, filtering, analysis, recording and streaming of tele-immersive 4D audio-visual data in a real-time, modular and flexible manner, so that individual data sources or algorithms can be replaced or evolved in the future. The key features are therefore:
* **F1:** *Generation* - Scene reconstruction from fixed lab spaces as a 4D-AV data source
* **F2:** *Transformation* - 4D-AV mixing and filtering system, from any number of arbitrary data sources
* **F3:** *Analysis* - 4D-AV Analysis system to extract and embed higher-level data about a scene
* **F4:** *Recording* - 4D-AV recording and playback as a generic 4D-AV data source
* **F5:** *Streaming* - Lossless and lossy compression of 4D-AV for storage and streaming
## 3.1 FUNCTIONAL REQUIREMENTS
### 3.1.1 Generation (F1)
* **3.1.1.2** Allow 4D scenes to be generated virtually from a scene description and animation
* **3.1.1.3** Support the translation of more traditional formats into our 4D-AV representation for use as a source (a common source abstraction is sketched after this list)
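Taken together with recording playback (3.1.4) and external streams (3.2.2.2), these generation requirements suggest that every producer of 4D-AV data should present the same interface to downstream consumers. The following is a minimal C++ sketch of that idea only; `Scene4D`, `Source4D` and the two example sources are illustrative assumptions rather than part of the specification.

```cpp
#include <vector>

// Hypothetical container for one frame of 4D-AV data; the real channels
// (geometry, texture, audio, metadata) are omitted for brevity.
struct Scene4D {
    std::vector<float> points;
};

// Every generator (F1), recording playback (F4) or external stream (3.2.2.2)
// could expose the same pull interface, so consumers need not care where the
// data originates.
class Source4D {
public:
    virtual ~Source4D() = default;
    // Fill 'out' with the next frame; return false at end-of-stream.
    virtual bool nextFrame(Scene4D &out) = 0;
};

// Example: reconstruction from passive multi-view stereo cameras (3.2.1.1).
class StereoReconstructionSource : public Source4D {
public:
    bool nextFrame(Scene4D &out) override {
        // ... capture images, run stereo matching, fuse the result into 'out' ...
        out.points.clear();
        return true;
    }
};

// Example: a purely virtual scene generated from a description and animation (3.1.1.2).
class VirtualSceneSource : public Source4D {
public:
    bool nextFrame(Scene4D &out) override {
        // ... evaluate the animation at the current time and sample it into 'out' ...
        out.points.clear();
        return true;
    }
};
```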
### 3.1.2 Transformation (F2)
* **3.1.2.1** Merge multiple 4D-AV scenes into a single scene, with no restriction on the origin of those scenes
* **3.1.2.2** Filter a scene to exclude all but a specified collection of entities
* **3.1.2.3** Filter a scene to exclude specific entities
* **3.1.2.4** Filter a scene by bounded region
* **3.1.2.5** Allow data channels or kinds of data to be excluded from the scene
* **3.1.2.6** Allow scene transformations (translation, scaling, rotation) when mixing
* **3.1.2.7** Any mixed scene output can also be an input to another mixing operation (see the sketch after this list)
* **3.1.2.8** Volumetric processing operations such as blur
* **3.1.2.9** Decompose a scene into discrete parts, perhaps forming a CSG graph or similar
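Requirements 3.1.2.1–3.1.2.7 imply that every transformation consumes scenes and produces a scene, which is what makes arbitrary chaining possible. A minimal C++ sketch of that composability is given below; the function names and the stub bodies are assumptions used for illustration, not proposed implementations.

```cpp
#include <vector>

// Same hypothetical per-frame container as in the generation sketch.
struct Scene4D {
    std::vector<float> points;
};

// Merge any number of scenes into one (3.1.2.1). Stub: concatenates geometry.
Scene4D mergeScenes(const std::vector<Scene4D> &inputs) {
    Scene4D out;
    for (const Scene4D &s : inputs)
        out.points.insert(out.points.end(), s.points.begin(), s.points.end());
    return out;
}

// Filter by bounded region (3.1.2.4). Stub: keeps coordinates under a bound.
Scene4D cropToRegion(const Scene4D &in, float bound) {
    Scene4D out;
    for (float p : in.points)
        if (p < bound) out.points.push_back(p);
    return out;
}

// Transform a scene while mixing (3.1.2.6). Stub: offsets every coordinate.
Scene4D translate(const Scene4D &in, float offset) {
    Scene4D out = in;
    for (float &p : out.points) p += offset;
    return out;
}

// Because each operation maps scenes to a scene, the output of one mix can feed
// another mix (3.1.2.7), allowing arbitrarily deep pipelines.
Scene4D mixLabAndRemote(const Scene4D &lab, const Scene4D &remote) {
    return translate(cropToRegion(mergeScenes({lab, remote}), 2.0f), -1.5f);
}
```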
### 3.1.3 Analysis (F3)
* **3.1.3.1** Identify primitive shapes and components of an image
* **3.1.3.2** Allow inter-frame correspondence of scene entities
* **3.1.3.3** Allow for semantic tagging of temporally persistent entities
* **3.1.3.4** Support motion analysis, especially for gestures and body language (see the sketch after this list)
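One way to read 3.1.3.2–3.1.3.4 is that analysis results become an extra data channel keyed by persistent entity identifiers. The structure below is a small C++ sketch of that reading; the type names and fields are assumptions used only to make the idea concrete.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// An entity keeps the same id across frames once the correspondence step
// (3.1.3.2) has matched it, so its tags persist over time (3.1.3.3).
using EntityId = std::uint64_t;

struct EntityObservation {
    int frame;                       // frame index in which the entity was seen
    std::vector<float> boundingBox;  // minimal stand-in for per-frame geometry
};

struct TrackedEntity {
    EntityId id;
    std::vector<std::string> tags;           // semantic labels, e.g. "person", "chair"
    std::vector<EntityObservation> history;  // one entry per frame, usable for motion analysis (3.1.3.4)
};

// The analysis output for a scene could then be embedded as higher-level data (F3).
using SceneAnnotations = std::map<EntityId, TrackedEntity>;
```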
### 3.1.4 Recording (F4)
* **3.1.4.1** Allow a 4D-AV scene to be saved to disk
* **3.1.4.2** Allow a 4D-AV scene to be loaded and replayed from disk
### 3.1.5 Streaming (F5)
* **3.1.5.1** Support compressed adaptive bitrate streaming of 4D-AV scenes (see the sketch after this list)
* **3.1.5.5** Stream decoding must function within a web browser
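Adaptive bitrate streaming (3.1.5.1) is usually realised by encoding each scene at several quality levels and letting the receiver choose the highest level its measured throughput can sustain. The C++ fragment below sketches only that selection step, independent of any particular codec or transport; the types and the 80% headroom figure are illustrative assumptions.

```cpp
#include <vector>

// One pre-encoded quality level of a 4D-AV stream (hypothetical).
struct QualityLevel {
    int levelIndex;      // position in the ladder, 0 = lowest quality
    double bitrateMbps;  // average bitrate needed to play this level smoothly
};

// Pick the highest-bitrate level the measured throughput can sustain, keeping
// some headroom so small bandwidth dips do not cause stalls; fall back to the
// lowest level if nothing fits.
int selectLevel(const std::vector<QualityLevel> &levels, double measuredMbps) {
    const double headroom = 0.8;  // use at most 80% of the measured throughput
    int best = -1, lowest = -1;
    double bestRate = 0.0, lowestRate = 1e300;
    for (const QualityLevel &q : levels) {
        if (q.bitrateMbps < lowestRate) { lowestRate = q.bitrateMbps; lowest = q.levelIndex; }
        if (q.bitrateMbps <= measuredMbps * headroom && q.bitrateMbps > bestRate) {
            bestRate = q.bitrateMbps;
            best = q.levelIndex;
        }
    }
    return best >= 0 ? best : lowest;
}
```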
### 3.1.6 Presentation (F6)
* **3.1.6.1** Support web-browser interaction with, and visualisation of, a scene
* **3.1.6.2** Support the use of AR/VR headsets for visualisation and interaction
### 3.1.7 Control (F7)
## 3.2 EXTERNAL INTERFACE REQUIREMENTS
### 3.2.1 Generation (F1)
* **3.2.1.1** Capture input from passive multi-view stereo cameras
* **3.2.1.2** Output from generators must be a standard representation (F8)
### 3.2.2 Streaming (F5)
* **3.2.2.1** Streams are to be accessible as a well-documented web service
* **3.2.2.2** External stream sources can be incorporated into a scene
### 3.2.3 Presentation (F6)
## 3.3 NON-FUNCTIONAL REQUIREMENTS