Human Trust in Robots

Investigator: Harold Soh.

Trust is crucial in shaping how humans interact with one another and with robots.

We are currently exploring how humans develop trust in robots and how we can use predictive models of trust to enable better human-robot collaboration. For example, we conducted a human-subjects study spanning two distinct task domains: a Fetch robot performing household tasks, and a virtual-reality simulation of an autonomous vehicle performing driving and parking maneuvers.

Based on our findings, we developed new differentiable models of trust evolution and transfer via latent task representations: (i) a rational Bayes model, (ii) a data-driven neural network model, and (iii) a hybrid model that combines the two.
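
To make the idea of trust transfer via latent task representations concrete, here is a minimal Python sketch. It is illustrative only and not the models from our papers: it assumes tasks are described by feature vectors, projects them into a small latent space, treats a user's perceived robot capability as a point in that space, and applies a simple gradient-style update after each observed success or failure. The dimensions, projection, and update rule are all hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not the papers' implementation): each task is a raw
# feature vector, a fixed projection maps it to a latent task representation,
# and a user's perceived robot capability lives in the same latent space.
TASK_DIM = 6      # raw task feature dimension (illustrative)
LATENT_DIM = 3    # latent task representation dimension (illustrative)

W = rng.normal(scale=0.5, size=(LATENT_DIM, TASK_DIM))  # task -> latent projection
capability = np.zeros(LATENT_DIM)                        # user's latent capability estimate


def embed(task_features: np.ndarray) -> np.ndarray:
    """Map raw task features to a latent task representation."""
    return W @ task_features


def predict_trust(task_features: np.ndarray) -> float:
    """Trust = sigmoid of how well perceived capability matches the task embedding."""
    z = embed(task_features)
    return 1.0 / (1.0 + np.exp(-(capability @ z)))


def update_after_observation(task_features: np.ndarray, success: bool, lr: float = 0.5) -> None:
    """Logistic-regression-style update after watching the robot succeed or fail.

    This stands in for a differentiable trust-evolution step; the Bayesian and
    hybrid formulations in the papers are more involved.
    """
    global capability
    z = embed(task_features)
    p = predict_trust(task_features)
    capability += lr * ((1.0 if success else 0.0) - p) * z


# Toy usage: observe the robot on one task, then query trust on a related,
# unseen task that shares latent structure (trust transfer).
seen_task = rng.normal(size=TASK_DIM)
unseen_task = seen_task + 0.1 * rng.normal(size=TASK_DIM)  # similar task

print("trust on unseen task before observations:", round(predict_trust(unseen_task), 3))
for _ in range(5):
    update_after_observation(seen_task, success=True)
print("trust on unseen task after 5 successes:", round(predict_trust(unseen_task), 3))
```

Because the unseen task embeds close to the observed one, successes on the first task raise predicted trust on the second, which is the transfer effect the latent representation is meant to capture.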

Experiments show that the proposed models outperform prevailing trust models when predicting trust on unseen tasks and users.

Further Reading:

  • Multi-Task Trust Transfer for Human-Robot Interaction, Harold Soh, Yaqi Xie, Min Chen, and David Hsu. International Journal of Robotics Research (IJRR), 2019.
  • The Transfer of Human Trust in Robot Capabilities across Tasks, Harold Soh, Pan Shu, Min Chen, and David Hsu. Robotics: Science and Systems (RSS), 2018. (Finalist for Best Paper Award)