Di Fu, Ph.D.


Curriculum vitae



Department of Informatics

University of Hamburg, Germany



A trained humanoid robot can perform human-like crossmodal social attention and conflict resolution.


Journal article


Di Fu*, Fares Abawi, Hugo Carneiro, Matthias Kerzel, Ziwei Chen, Erik Strahl, Xun Liu*, Stefan Wermter (*Co-corresponding authors)
International Journal of Social Robotics

DOI: 10.1007/s12369-023-00993-3

Cite

APA
Fu, D., Abawi, F., Carneiro, H., Kerzel, M., Chen, Z., Strahl, E., Liu, X., & Wermter, S. (2023). A trained humanoid robot can perform human-like crossmodal social attention and conflict resolution. International Journal of Social Robotics. https://doi.org/10.1007/s12369-023-00993-3


Chicago/Turabian
Fu, Di, Fares Abawi, Hugo Carneiro, Matthias Kerzel, Ziwei Chen, Erik Strahl, Xun Liu, and Stefan Wermter. “A Trained Humanoid Robot Can Perform Human-like Crossmodal Social Attention and Conflict Resolution.” International Journal of Social Robotics (2023). https://doi.org/10.1007/s12369-023-00993-3.


MLA
Fu, Di, et al. “A Trained Humanoid Robot Can Perform Human-like Crossmodal Social Attention and Conflict Resolution.” International Journal of Social Robotics, 2023, doi:10.1007/s12369-023-00993-3.


BibTeX

@article{fu2023trained,
  title   = {A trained humanoid robot can perform human-like crossmodal social attention and conflict resolution},
  author  = {Fu, Di and Abawi, Fares and Carneiro, Hugo and Kerzel, Matthias and Chen, Ziwei and Strahl, Erik and Liu, Xun and Wermter, Stefan},
  journal = {International Journal of Social Robotics},
  year    = {2023},
  doi     = {10.1007/s12369-023-00993-3}
}

Abstract

To enhance human-robot social interaction, it is essential for robots to process multiple social cues in complex real-world environments. However, incongruence of input information across modalities is inevitable and can be challenging for robots to process. To tackle this challenge, our study adopted the neurorobotic paradigm of crossmodal conflict resolution to make a robot express human-like social attention. For the human study, a behavioural experiment was conducted with 37 participants. We designed a round-table meeting scenario with three animated avatars to improve ecological validity. Each avatar wore a medical mask to obscure the facial cues of the nose, mouth, and jaw. The central avatar shifted its eye gaze while the peripheral avatars generated sound; gaze direction and sound location were either spatially congruent or incongruent. We observed that the central avatar’s dynamic gaze could trigger crossmodal social attention responses. In particular, human performance was better under the congruent audio-visual condition than under the incongruent condition. For the robot study, our saliency prediction model was trained to detect social cues, predict audio-visual saliency, and attend selectively. After the trained model was mounted on the iCub, the robot was exposed to laboratory conditions similar to those of the human experiment. While human performance remained superior overall, our trained model demonstrated that it could replicate attention responses similar to those of humans.

