Objective 1: Integrated Information Visualization
Battlespace will offer advanced visualization of battle-relevant information, including terrain, military units, targets, sensor data, weather, and cultural features. All available information will be consolidated into a single coherent picture that can be viewed from multiple perspectives and at multiple scales to maximize situational awareness.
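One way this kind of consolidation is commonly organized is as independent information layers composed onto a shared view. The sketch below illustrates that pattern only; the class and layer names are hypothetical, not the project's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One source of battle-relevant information (terrain, units, weather, ...)."""
    name: str
    visible: bool = True
    items: list = field(default_factory=list)

@dataclass
class BattlespaceView:
    """A single coherent picture composed from all visible layers."""
    layers: dict = field(default_factory=dict)

    def add_layer(self, layer: Layer) -> None:
        self.layers[layer.name] = layer

    def compose(self) -> list:
        """Flatten every visible layer into one render list."""
        picture = []
        for layer in self.layers.values():
            if layer.visible:
                picture.extend((layer.name, item) for item in layer.items)
        return picture

# Illustrative usage: three sources, one toggled off for the current view.
view = BattlespaceView()
view.add_layer(Layer("terrain", items=["elevation grid"]))
view.add_layer(Layer("units", items=["UAV-1", "UGV-2"]))
view.add_layer(Layer("weather", visible=False, items=["wind field"]))
print(view.compose())
```

Keeping each source in its own layer lets a given perspective or scale show or hide information without touching the underlying data.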
Objective 2: Multi-Modal Interfaces
Battlespace will optimize the human-computer interface to maximize an operator's ability to control multiple unmanned aerial and ground vehicles. Advanced interface technologies such as speech recognition, speech synthesis, eye tracking, gesture recognition, and brain-computer interfaces will be investigated. Personalized information displays will be created using heads-up display technology that overlays task-specific information for individual users directly on the shared view of the immersive virtual battlespace. The ability of multiple operators to work simultaneously within the shared environment will also be investigated.
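A system that accepts several input modalities from several operators typically needs some routing layer between recognizers and vehicle commands. The following is a minimal, hypothetical sketch of such a dispatcher; all names and the event shape are illustrative assumptions, not part of the Battlespace design:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class InputEvent:
    modality: str   # e.g. "speech", "gaze", "gesture"
    operator: str   # which operator produced the event
    payload: str    # recognized command or selected target

class MultiModalDispatcher:
    """Route events from multiple input modalities to registered handlers."""

    def __init__(self) -> None:
        self._handlers: dict[str, Callable[[InputEvent], str]] = {}

    def register(self, modality: str, handler: Callable[[InputEvent], str]) -> None:
        self._handlers[modality] = handler

    def dispatch(self, event: InputEvent) -> str:
        handler = self._handlers.get(event.modality)
        if handler is None:
            # Unrecognized modality: ignore rather than fail.
            return f"ignored:{event.modality}"
        return handler(event)

# Illustrative usage: speech issues commands, gaze selects targets.
dispatcher = MultiModalDispatcher()
dispatcher.register("speech", lambda e: f"{e.operator} commands: {e.payload}")
dispatcher.register("gaze", lambda e: f"{e.operator} selects: {e.payload}")
print(dispatcher.dispatch(InputEvent("speech", "op1", "UAV-1 hold position")))
```

Because handlers are keyed by modality and events carry the operator's identity, the same structure supports multiple operators sharing one environment while receiving personalized responses.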
Objective 3: Iterative Interface Design
Interface designs will be evaluated in user studies using gameplay-testing methodologies. Realistic command-and-control scenarios will require users to compete for control of a single virtual battlespace. Interface components will be varied, and task performance will be measured and compared across variants. An iterative design approach will be adopted to optimize the interface.
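The vary-measure-compare loop described above amounts to a simple experiment harness. As a sketch, assuming task-completion time is the performance measure (the variant names and timings below are invented for illustration):

```python
from statistics import mean

def compare_variants(trials: dict[str, list[float]]) -> str:
    """Return the interface variant with the lowest mean task-completion time (s)."""
    means = {variant: mean(times) for variant, times in trials.items()}
    return min(means, key=means.get)

# Hypothetical timings from two interface variants under the same scenario.
trials = {
    "baseline": [41.2, 39.8, 44.0],
    "eye-tracking": [35.1, 33.9, 36.4],
}
print(compare_variants(trials))  # -> eye-tracking
```

In practice each iteration would feed the winning variant back into the next round of design, and a real study would add significance testing rather than comparing raw means.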
Objective 4: Real-World Military Training Scenarios
Battlespace will enable the integration of live and virtual training environments. Our relationship with the Iowa National Guard at Camp Dodge will enable collaboration through their MOUT (Military Operations in Urbanized Terrain) facility, which will serve as a testing environment.
Objective 5: Collaboration Across Systems and Simulators
Battlespace will explore methods for integrating multiple geographically distributed simulators and related visualization systems to facilitate training for joint armed forces. One of the greatest challenges of current engagements is the coordinated management of autonomous, semi-autonomous, and human assets across multiple branches of the military. We will develop a network-enabled simulation platform that supports smooth visual scale dilation from high (theater-level) to low (street-level) views, providing sensor-enhanced situational awareness at all levels of command and control.
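Scale dilation of this kind is usually driven by a level-of-detail rule that maps the viewer's scale to what gets rendered. The sketch below shows the idea with invented altitude thresholds; the cutoffs and level names are illustrative assumptions only:

```python
def detail_level(view_altitude_m: float) -> str:
    """Map camera altitude (meters) to a rendering level of detail.

    Thresholds are hypothetical; a real system would tune them and blend
    smoothly between levels rather than switching abruptly.
    """
    if view_altitude_m < 500:
        return "street"    # individual soldiers, vehicles, sensor feeds
    if view_altitude_m < 10_000:
        return "tactical"  # unit icons, sensor footprints
    return "theater"       # aggregated force symbols

# Illustrative usage across three zoom levels.
for alt in (120, 3_000, 50_000):
    print(alt, detail_level(alt))
```

Distributing such a platform then becomes a matter of synchronizing entity state across sites while letting each station choose its own detail level locally.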