Organized by experts from across academia, industry, the Federal Labs, and SPIE, this meeting will highlight emerging capabilities in immersive technologies and degraded visual environments as critical enablers to future multi-domain operations.
This study explores two hypotheses about human-agent teaming: (1) real-time coordination among a large set of autonomous robots can be achieved using predefined "plays," which define how to execute a task, and "audibles," which modify a play on the fly; and (2) a spokesperson agent can serve as a representative for a group of robots, relaying information between the robots and their human teammates. These hypotheses are tested in a simulated game environment: a human participant leads a search-and-rescue operation to evacuate a town threatened by an approaching wildfire, with the objective of saving as many lives as possible. The participant communicates verbally with a virtual agent controlling a team of ten aerial robots and one ground vehicle, while observing a live map display showing the real-time locations of the fire and identified survivors. Because full automation is not currently feasible, two human controllers operate the agent's speech and actions and input parameters to the robots, which then operate autonomously until the parameters are changed. Designated plays include monitoring the spread of the fire, searching for survivors, broadcasting warnings, guiding residents to safety, and dispatching the rescue vehicle. Successfully evacuating all the residents requires personal intervention in some cases (e.g., stubborn residents) while delegating other responsibilities to the spokesperson agent and robots, all within a rapidly changing scene. The study records participants' verbal and nonverbal behavior in order to identify the strategies people use when communicating with robotic swarms, and to collect data for eventual automation.
As part of the Institute for Creative Technologies and the School of Cinematic Arts at the University of Southern California, the Mixed Reality Lab develops technologies and techniques for presenting realistic immersive training experiences. Such experiences typically place users within a complex ecology of social actors, physical objects, and collections of intents, motivations, relationships, and other psychological constructs. Currently, it remains infeasible to completely synthesize the interactivity and sensory signatures of such ecologies. For this reason, the lab advocates mixed reality methods for training and conducts experiments exploring such methods. At present, the lab focuses on understanding and exploiting the elasticity of human perception with respect to representational differences between real and virtual environments. This paper presents an overview of three projects: techniques for redirected walking, displays for the representation of virtual humans, and audio processing to increase stress.