ACC-HMS 2016 Workshop
  • Home
  • People
  • Schedule
  • Abstracts
Talks are listed alphabetically by title.

Co-Robotics for Off-road and Construction Equipment

Girish Chowdhary
Assistant Professor, University of Illinois at Urbana-Champaign
This talk presents recent advances in human-robot collaborative learning and task execution, with a specific application focus on construction co-robotics. In many engineering applications where operations are repetitive and the surroundings are well controlled, robots equipped with various sensing devices and control algorithms have replaced human operators. This has not been the case for real-world co-robots such as heavy construction and farming equipment, including excavators, tractors, and backhoes. When using these co-robots, safety-critical decisions must be made in real time in uncertain and dynamically changing environments. Although extensive work has been done to automate these co-robots and provide some autonomy based on sensing their surroundings, these efforts have not come to fruition because of the level of uncertainty surrounding a construction site, the uniqueness of each situation, and the safety-critical nature of construction tasks. Real-world operation still depends greatly on skilled human operators. However, human operators of construction equipment require a significant amount of operational training and real-world experience to parse the task at hand, make quick decisions in safety-critical situations, and efficiently control robot movements. This talk will present recent results on learning-from-demonstration algorithms and human-machine interaction interfaces specifically designed to bridge the expert-novice skill gap. The presented algorithms learn complex task decompositions from demonstrations by expert operators of construction co-robots and use this learning to train and aid novice operators in executing complex tasks.
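As a minimal illustration of the learning-from-demonstration idea, a policy can be fit to logged expert state-action pairs. The sketch below is a simple behavior-cloning stand-in (linear least squares), not the task-decomposition algorithm of the talk; the synthetic "expert" data and all names are assumptions for illustration.

```python
import numpy as np

def fit_policy(states, actions):
    """Behavior cloning: fit a linear policy a = s @ W from expert demos.

    A minimal learning-from-demonstration sketch via least squares; real
    systems would use richer policy classes and task segmentation.
    """
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return W

# usage: synthetic "expert" demonstrations of a proportional controller a = -0.5 s
rng = np.random.default_rng(1)
states = rng.normal(size=(200, 3))          # logged operator states
actions = -0.5 * states                     # logged expert actions
W = fit_policy(states, actions)             # recovers the expert's gain
```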

Flexible Human Machine Information Fusion

J.W. Curtis
Senior Research Engineer, Air Force Research Laboratory
AFRL has developed technologies that allow very flexible fusion of human and machine perceptions in complex and dynamic environments. The work leverages the power of sample-based Bayesian filters and adds a method of encoding human tactile inputs as likelihood functions, so as to represent both positive and negative detections while allowing a variety of methods to properly represent human perceptual uncertainty.
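A hedged sketch of this style of fusion: a human report (positive or negative) over a region is encoded as a likelihood and used in a particle-filter measurement update. The circular report region, `p_detect`, and `p_false` are illustrative assumptions, not AFRL's encoding.

```python
import numpy as np

def human_likelihood(particles, report_center, report_radius,
                     detection, p_detect=0.8, p_false=0.1):
    """Likelihood of a human report for each particle.

    A positive report up-weights particles inside the reported region;
    a negative report ("not here") down-weights them. p_detect / p_false
    model the operator's perceptual reliability (illustrative values).
    """
    inside = np.linalg.norm(particles - report_center, axis=1) <= report_radius
    if detection:   # operator reports "target is here"
        return np.where(inside, p_detect, p_false)
    else:           # operator reports "target is NOT here"
        return np.where(inside, 1.0 - p_detect, 1.0 - p_false)

def fuse_human_report(particles, weights, report_center, report_radius, detection):
    """One Bayesian measurement update of a particle filter with a human report."""
    w = weights * human_likelihood(particles, np.asarray(report_center),
                                   report_radius, detection)
    return w / w.sum()

# usage: 1000 particles uniform on [0,10]^2; operator sees no target near (2,2)
rng = np.random.default_rng(0)
particles = rng.uniform(0, 10, size=(1000, 2))
weights = np.full(len(particles), 1.0 / len(particles))
weights = fuse_human_report(particles, weights, (2.0, 2.0), 1.5, detection=False)
```

Because the report is negative, probability mass shifts away from the reported region rather than toward it, which is the asymmetry the likelihood encoding captures.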

Human Interaction with Complex and Autonomous Systems and Vehicles

Erin Solovey
Assistant Professor, Drexel University
As computers, robots, and vehicles become increasingly capable and autonomous, the role and expectations of the user in the system change, and the user no longer needs to directly control all low-level aspects of the system. On the one hand, these advancements could allow people to work at a higher level and simultaneously supervise numerous tasks and systems. However, this can easily lead to increased cognitive demand from monitoring the vast amounts of data being generated and from performing numerous simultaneous tasks. On the other hand, high levels of autonomy can also lead to long-duration, low-workload situations in which little is required of the person, resulting in boredom and disengagement that can be detrimental when the person is later called upon to perform a task. In this talk, I will discuss work that aims to leverage recent advances in non-invasive brain-computer interfaces and physiological sensors to increase the effectiveness of human interaction with complex and autonomous systems by enabling them to respond and adapt to the individual's changing cognitive state. I will also discuss work on optimizing human-autonomy collaboration by better understanding the strengths and needs of the user in the system and designing better team structures and user interfaces.

Learning Representations and Algorithms for Human-Robot Interaction

Nicholas Roy
Associate Professor, Massachusetts Institute of Technology
From the Kalman filter to stochastic motion graphs, the underlying representation has had more impact on robotics than the corresponding inference and planning algorithms. I will discuss some recent results in representations for human-robot interaction that can be learned from data, and describe some open challenges.

Optimal User Attention Allocation in a Multi-tasking Environment

Amit Surana
United Technologies Research Center
The desire to invert the human-to-machine ratio is driving increased levels of autonomy in military, homeland security, and commercial applications; consequently, human operators are increasingly expected to play a supervisory role. Such a paradigm gives rise to a unique set of challenges related to operator workload management and situational awareness, which can undermine effective human-machine teaming. Adaptive automation concepts provide a promising direction for alleviating these challenges. Along those lines, this talk will describe a novel model-based framework for optimal operator attention allocation in multi-tasking environments. The framework builds on control-theoretic and optimization-based formulations and combines task-specific cognitive models of human performance with real-time psychophysiological data to assess operator state and provide recommendations that enhance operator performance at multiple levels of decision making. We illustrate the framework in the context of human supervisory control of multiple unmanned systems, motivated by manned-unmanned teaming and surveillance missions.
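One toy instance of the attention-allocation problem: maximize total task reward subject to a workload budget, with each task receiving a fraction of the operator's attention. This reduces to a fractional knapsack solved greedily by reward-per-workload ratio. It is a simplified stand-in for the talk's framework, and the reward rates, workload costs, and budget below are all assumed values.

```python
import numpy as np

def allocate_attention(reward_rates, workload_costs, budget):
    """Greedily allocate operator attention fractions x_i in [0, 1].

    Maximizes sum_i r_i * x_i subject to sum_i c_i * x_i <= budget
    (a fractional knapsack): serve tasks in decreasing r_i / c_i order.
    """
    r = np.asarray(reward_rates, dtype=float)
    c = np.asarray(workload_costs, dtype=float)
    order = np.argsort(-r / c)              # best reward-per-workload first
    x = np.zeros(len(r))
    remaining = budget
    for i in order:
        x[i] = min(1.0, remaining / c[i])   # give as much attention as fits
        remaining -= x[i] * c[i]
        if remaining <= 0:
            break
    return x

# usage: three surveillance tasks competing for a limited attention budget
shares = allocate_attention([5.0, 3.0, 1.0], [2.0, 1.0, 1.0], budget=2.5)
```

A model-based framework like the one in the talk would replace the static reward rates with cognitive-model predictions updated from psychophysiological measurements.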