A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation.
Volume 7, Issue 6, Oct. 2020

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: Zhihao Shen, Armagan Elibol and Nak Young Chong, "Understanding Nonverbal Communication Cues of Human Personality Traits in Human-Robot Interaction," IEEE/CAA J. Autom. Sinica, vol. 7, no. 6, pp. 1465-1477, Nov. 2020. doi: 10.1109/JAS.2020.1003201

Understanding Nonverbal Communication Cues of Human Personality Traits in Human-Robot Interaction

doi: 10.1109/JAS.2020.1003201
Funds: This work was supported by the EU-Japan coordinated R&D project on “Culture Aware Robots and Environmental Sensor Systems for Elderly Support,” commissioned by the Ministry of Internal Affairs and Communications of Japan and the EC Horizon 2020 Research and Innovation Programme (737858). The authors are also grateful for financial support from the Air Force Office of Scientific Research (AFOSR-AOARD/FA2386-19-1-4015).
  • With the increasing presence of robots in our daily life, there is a strong demand for strategies that achieve high-quality interaction between robots and users by enabling robots to understand users’ mood, intention, and other states. During human-human interaction, personality traits strongly influence human behavior, decisions, and mood. Therefore, we propose an efficient computational framework that endows a robot with the capability of understanding the user’s personality traits from the user’s nonverbal communication cues, represented by three visual features, namely head motion, gaze, and body motion energy, and three vocal features, namely voice pitch, voice energy, and mel-frequency cepstral coefficients (MFCC). In this study, the Pepper robot served as a communication robot that interacted with each participant by asking questions while extracting the nonverbal features from each participant’s habitual behavior using its on-board sensors. Each participant’s personality traits were evaluated with a questionnaire. We then trained ridge regression and linear support vector machine (SVM) classifiers using the nonverbal features and the personality trait labels obtained from the questionnaire, and evaluated the classifiers’ performance. The proposed models showed promising binary classification performance in recognizing each of the Big Five personality traits of the participants based on individual differences in nonverbal communication cues.
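The following is a minimal sketch of the binary-classification step described in the abstract: ridge regression and linear SVM classifiers trained to separate high from low scorers on each Big Five trait from per-participant nonverbal feature vectors. It assumes synthetic data and the scikit-learn library; the array shapes, random labels, and cross-validation setup are illustrative assumptions, not the authors' exact protocol.

```python
# Hypothetical sketch of trait classification from nonverbal feature vectors.
# Data, shapes, and labels are synthetic stand-ins for demonstration only.
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per participant; columns stand in for the six nonverbal cues
# (head motion, gaze, body motion energy, voice pitch, voice energy, MFCC).
n_participants, n_features = 40, 40
X = rng.normal(size=(n_participants, n_features))

# Binary (high/low) labels per Big Five trait, e.g. from a split of
# questionnaire scores; here they are random placeholders.
traits = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]
y = {t: rng.integers(0, 2, size=n_participants) for t in traits}

models = {
    "ridge": make_pipeline(StandardScaler(), RidgeClassifier(alpha=1.0)),
    "linear_svm": make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000)),
}

# Cross-validated accuracy per trait and per classifier.
for trait in traits:
    for name, model in models.items():
        acc = cross_val_score(model, X, y[trait], cv=5, scoring="accuracy")
        print(f"{trait:17s} {name:10s} mean accuracy = {acc.mean():.2f}")
```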

     

  • [1]
    T. Minato, M. Shimada, H. Ishiguro, and S. Itakura, “Development of an android robot for studying human-robot interaction,” in Proc. 17th Int. Conf. Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, Ottawa, Canada, 2004, pp. 424–434.
    [2]
    S. Woods, K. Dautenhahn, and J. Schulz, “The design space of robots: Investigating children’s views,” in Proc. 13th IEEE Int. Workshop on Robot and Human Interactive Communication, Kurashiki, Japan, 2004, pp. 47–52.
    [3]
    V. Ng-Thow-Hing, P. C. Luo, and S. Okita, “Synchronized gesture and speech production for humanoid robots,” in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, Taipei, China, 2010, pp. 4617–4624.
    [4]
    D. McNeill, Language And Gesture. Cambridge, Britain: Cambridge University Press, 2000.
    [5]
    S. Kopp, B. Krenn, S. Marsella, A. N. Marshall, C. Pelachaud, H. Pirker, K. Thórisson, and H. Vilhjálmsson, “Towards a common framework for multimodal generation: The behavior markup language,” in Proc. 6th Intelligent Virtual Agents, Marina Del Rey, USA, 2006, pp. 205–217.
    [6]
    S. Kopp, K. Bergmann, and I. Wachsmuth, “Multimodal communication from multimodal thinking–towards an integrated model of speech and gesture production,” Int. J. Semantic Comput., vol. 2, no. 1, pp. 115–136, 2008. doi: 10.1142/S1793351X08000361
    [7]
    B. Bruno, N. Y. Chong, H. Kamide, S. Kanoria, J. Lee, Y. Lim, A. K. Pandey, C. Papadopoulos, I. Papadopoulos, F. Pecora, A. Saffiotti, and A. Sgorbissa, “Paving the way for culturally competent robots: A position paper,” in Proc. 26th IEEE Int. Symp. Robot and Human Interactive Communication, Lisbon, Portugal, 2017, pp. 553–560.
    [8]
    N. T. V. Tuyen, S. Jeong, and N. Y. Chong, “Emotional bodily expressions for culturally competent robots through long term human-robot interaction,” in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, Madrid, Spain, 2018, pp. 2008–2013.
    [9]
    S. Saunderson and G. Nejat, “How robots influence humans: A survey of nonverbal communication in social human–robot interaction,” Int. J. Social Robot., vol. 11, no. 4, pp. 575–608, Jan. 2019. doi: 10.1007/s12369-019-00523-0
    [10]
    A. Aly and A. Tapus, “A model for synthesizing a combined verbal and nonverbal behavior based on personality traits in human-robot interaction,” in Proc. ACM/IEEE Int. Conf. Human-Robot Interaction, Tokyo, Japan, 2013, pp. 325–332.
    [11]
    R. Hogan, J. Johnson, and S. Briggs, Handbook of Personality Psychology. San Diego, USA: Academic Press, 1997.
    [12]
    J. E. Lydon, D. W. Jamieson, and M. P. Zanna, “Interpersonal similarity and the social and intellectual dimensions of first impressions,” Soc. Cognit., vol. 6, no. 4, pp. 269–286, Dec. 1988. doi: 10.1521/soco.1988.6.4.269
    [13]
    C. A. Reid, J. D. Green, and J. L. Davis, “Attitude alignment increases trust, respect, and perceived reasoning ability to produce attraction,” Person. Rel., vol. 25, no. 2, pp. 171–189, Jun. 2018. doi: 10.1111/pere.12237
    [14]
    K. Isbister and C. Nass, “Consistency of personality in interactive characters: Verbal cues, non-verbal cues, and user characteristics,” Int. J. Human-Comput. Stud., vol. 53, no. 2, pp. 251–267, Aug. 2000. doi: 10.1006/ijhc.2000.0368
    [15]
    C. Nass and M. K. Lee, “Does computer-synthesized speech manifest personality? Experimental tests of recognition, similarity-attraction, and consistency-attraction” J. Exp. Psychol.:Appl., vol. 7, no. 2, pp. 171–181, 2001.
    [16]
    A. Rossi, K. Dautenhahn, K. Lee Koay, and M. L. Walters, “The impact of peoples’ personal dispositions and personalities on their trust of robots in an emergency scenario,” Paladyn,J. Behav. Robot., vol. 9, no. 1, pp. 137–154, Jul. 2018. doi: 10.1515/pjbr-2018-0010
    [17]
    J. Cassell and J. Bickmore, “Negotiated collusion: Modeling social language and its relationship effects in intelligent agents,” User Model. User-Adap. Inter., vol. 13, no. 1-2, pp. 89–132, Feb. 2003.
    [18]
    S. Woods, K. Dautenhahn, C. Kaouri, R. te Boekhorst, K. Lee Koay, and M. L. Walters, “Are robots like people? Relationships between participant and robot personality traits in human-robot interaction studies” Interact. Stud., vol. 8, no. 2, pp. 281–305, Jan. 2007. doi: 10.1075/is.8.2.06woo
    [19]
    M. R. Barrick and M. K. Mount, “The big five personality dimensions and job performance: A meta-analysis,” Person. Psychol., vol. 44, pp. 1–26, Mar. 1991. doi: 10.1111/j.1744-6570.1991.tb00688.x
    [20]
    A. Aly and A. Tapus, “Towards an intelligent system for generating an adapted verbal and nonverbal combined behavior in human-robot interaction,” Auton. Robots, vol. 40, no. 2, pp. 193–209, 2016. doi: 10.1007/s10514-015-9444-1
    [21]
    K. M. Lee, W. Peng, S. A. Jin, and C. Yan, “Can robots manifest personality? An empirical test of personality recognition, social responses, and social presence in human-robot interaction” J. Commun., vol. 56, no. 4, pp. 754–772, Dec. 2006. doi: 10.1111/j.1460-2466.2006.00318.x
    [22]
    E. Park, D. Jin, and A. P. del Pobil, “The law of attraction in human-robot interaction,” Int. J. Adv. Robot. Syst., vol. 9, no. 2, pp. 1–7, Jan. 2012.
    [23]
    H. Salam, O. Çeliktutan, I. Hupont, H. Gunes, and M. Chetouani, “Fully automatic analysis of engagement and its relationship to personality in human-robot interactions,” IEEE Access, vol. 5, pp. 705–721, 2017. doi: 10.1109/ACCESS.2016.2614525
    [24]
    H. Nakajima, R. Yamada, S. Brave, Y. Morishima, C. Nass, and S. Kawaji, “The functionality of human-machine collaboration systems-mind model and social behavior,” in Proc. IEEE Int. Conf. Systems, Man and Cybernetics, Washington DC, USA, 2003, pp. 2381–2387.
    [25]
    W. Revelle and K. R. Scherer, “Personality and emotion,” in Oxford Companion to the Affective Sciences, D. Sander and K. R. Scherer, Eds. Oxford, Britain: Oxford University Press, 2009, pp. 1–4.
    [26]
    Z. H. Shen, A. Elibol, and N. Y. Chong, “Inferring human personality traits in human-robot social interaction,” in Proc. 14th ACM/IEEE Int. Conf. Human-Robot Interaction, Daegu, Korea (South), 2019, pp. 578–579.
    [27]
    NaoQi documentation center. [Online]. Available: http://doc.aldebaran.com/2-5/home_pepper.html. Accessed: Jul. 29, 2019.
    [28]
    J. Kickul and G. Neuman, “Emergent leadership behaviors: The function of personality and cognitive ability in determining teamwork performance and KSAs,” J. Business Psychol., vol. 15, no. 1, pp. 27–51, Sept. 2000. doi: 10.1023/A:1007714801558
    [29]
    O. Celiktutan and H. Gunes, “Computational analysis of human-robot interactions through first-person vision: Personality and interaction experience,” in Proc. 24th IEEE Int. Symp. Robot and Human Interactive Communication, Kobe, Japan, pp. 815–820.
    [30]
    L. W. Morris, Extraversion and Introversion-An Interactional Perspective, Washington, USA: Hemisphere Publishing Corporation, 1979.
    [31]
    A. Tapus and M. J. Mataric, “Socially assistive robots: The link between personality, empathy, physiological signals, and task performance,” in Proc. AAAI Spring Symp. Emotion, Personality and Social Behavior, Stanford, California, USA, 2008, pp. 133–140.
    [32]
    I. B. Myers and P. B. Myers, Gifts Differing: Understanding Personality Type, California, USA: Davies-Black Publishing, 1980.
    [33]
    H. J. Eysenck, “Dimensions of personality: 16, 5 or 3? Criteria for a taxonomic paradigm” Person. Individ. Diff., vol. 12, no. 8, pp. 773–790, 1991. doi: 10.1016/0191-8869(91)90144-Z
    [34]
    L. R. Goldberg, “An alternative ‘description of personality’: The big-five factor structure,” Personality and Social Psychology, vol. 59, no. 6, pp. 1216–1229, 1990. doi: 10.1037/0022-3514.59.6.1216
    [35]
    L. R. Goldberg, “A broad-bandwidth, public-domain, personality inventory measuring the lower-level facets of several five-factor models,” in Personality Psychology in Europe, I. Mervielde, I. Deary, F. De Fruyt, and F. Ostendorf, Eds. Tilburg, The Netherlands: Tilburg University Press, 1999, pp. 7–28, 1999.
    [36]
    G. Mohammadi, A. Vinciarelli, and M. Mortillaro, “The voice of personality: Mapping nonverbal vocal behavior into trait attributions,” in Proc. 2nd Int. Workshop on Social Signal Processing, Firenze, Italy, 2010, pp. 17–20.
    [37]
    F. Pianesi, N. Mana, A. Cappelletti, B. Lepri, and M. Zancanaro, “Multimodal recognition of personality traits in social interactions,” in Proc. 10th Int. Conf. Multimodal Interfaces, Chania, Greece, 2008, pp. 53–60.
    [38]
    A. Vinciarelli and G. Mohammadi, “A survey of personality computing,” IEEE Trans. Affect. Comput., vol. 5, no. 3, pp. 273–291, Jul.–Sept. 2014. doi: 10.1109/TAFFC.2014.2330816
    [39]
    S. D. Gosling, P. J. Rentfrow, and W. B. Swann Jr, “A very brief measure of the big-five personality domains,” J. Res. Person., vol. 37, no. 6, pp. 504–528, Dec. 2003. doi: 10.1016/S0092-6566(03)00046-1
    [40]
    P. T. Costa Jr and R. R. McCrae, “Revised NEO personality inventory (NEO-PI-R) and NEO five-factor inventory (NEO-FFI): Professional manual,” Odessa, Ukraine: Psychological Assessment Resources, 1992.
    [41]
    R. R. McCrae and P. T. Costa Jr, “A contemplated revision of the neo five-factor inventory,” Person. Ind. Diff., vol. 36, no. 3, pp. 587–596, Feb. 2004. doi: 10.1016/S0191-8869(03)00118-1
    [42]
    L. R. Goldberg, “The development of markers for the big-five factor structure,” Psychol. Assess., vol. 4, no. 1, pp. 26–42, 1992. doi: 10.1037/1040-3590.4.1.26
    [43]
    A. Furnham, “Language and personality,” in Handbook of Language and Social Psychology, H. Giles and W. Robinson Eds. New York, USA: Wiley, 1990.
    [44]
    J. M. Dewaele and A. Furnham, “Extraversion: The unloved variable in applied linguistic research,” Lang. Learn., vol. 49, no. 3, pp. 509–544, Sept. 1999. doi: 10.1111/0023-8333.00098
    [45]
    R. Hassin and Y. Trope, “Facing faces: Studies on the cognitive aspects of physiognomy,” Person. Soc. Psychol., vol. 78, no. 5, pp. 837–852, 2000. doi: 10.1037/0022-3514.78.5.837
    [46]
    M. R. Mehl, S. D. Gosling, and J. W. Pennebaker, “Personality in its natural habitat: Manifestations and implicit folk theories of personality in daily life,” Person. Soc. Psychol., vol. 90, no. 5, pp. 862–877, 2006. doi: 10.1037/0022-3514.90.5.862
    [47]
    J. W. Pennebaker and L. A. King, “Linguistic styles: Language use as an individual difference,” Person. Soc. Psychol., vol. 77, no. 6, pp. 1296–1312, 1999. doi: 10.1037/0022-3514.77.6.1296
    [48]
    F. Mairesse and M. Walker, “Words mark the nerds: Computational models of personality recognition through language,” in Proc. 28th Annu. Conf. Cognitive Science Society, Vancouver, 2006, pp. 543–548.
    [49]
    O. Aran and D. Gatica-Perez, “One of a kind: Inferring personality impressions in meetings,” in Proc. ACM on Int. Conf. Multimodal Interaction, Sydney, Australia, 2013, pp. 11–18.
    [50]
    G. M. Lucas, J. Boberg, D. Traum, R. Artstein, J. Gratch, A. Gainer, E. Johnson, A. Leuski, and M. Nakano, “Getting to Know each other: The role of social dialogue in recovery from errors in social robots,” in Proc. ACM/IEEE Int. Conf. Human-Robot Interaction, Chicago, USA: ACM, 2018, pp. 344–351.
    [51]
    P. Patompak, S. Jeong, I. Nilkhamhang, and N. Y. Chong, “Learning proxemics for personalized human-robot social interaction,” Int. J. Soc. Robot., vol. 12, no. 1, pp. 267–280, May 2019.
    [52]
    S. M. Anzalone, G. Varni, S. Ivaldi, and M. Chetouani, “Automated prediction of extraversion during human–humanoid interaction,” Int. J. Soc. Robot., vol. 9, no. 3, pp. 385–399, 2017. doi: 10.1007/s12369-017-0399-6
    [53]
    Z. Zafar, S. H. Paplu, and K. Berns, “Automatic assessment of human personality traits: A step towards intelligent human-robot interaction,” in Proc. IEEE-RAS 18th Int. Conf. Humanoid Robots, Beijing, China: IEEE, 2018, pp. 1–9.
    [54]
    J. T. Webb, “Interview synchrony: An investigation of two speech rate measures in an automated standardized interview,” in Studies in Dyadic Communication, A. W. Siegman and B. Pope, Eds. Elmsford, USA: Pergamon Press, 1972, pp. 115–133.
    [55]
    A. Guidi, C. Gentili, E. P. Scilingo, and N. Vanello, “Analysis of speech features and personality traits,” Biomed. Signal Process. Control, vol. 51, pp. 1–7, May 2019. doi: 10.1016/j.bspc.2019.01.027
    [56]
    S. Okada, O. Aran, and D. Gatica-Perez, “Personality trait classification via co-Occurrent multiparty multimodal event discovery,” in Proc. ACM on Int. Conf. Multimodal Interaction, Washington, USA, 2015, pp. 15–22.
    [57]
    D. Gatica-Perez, D. Sanchez-Cortes, T. M. T. Do, D. B. Jayagopi, and K. Otsuka, “Vlogging over time: Longitudinal impressions and behavior in YouTube,” in Proc. 17th Int. Conf. Mobile and Ubiquitous Multimedia, Cairo, Egypt, 2018, pp. 37–46.
    [58]
    O. Kampman, E. J. Barezi, D. Bertero, and P. Fung, “Investigating audio, video, and text fusion methods for end-to-end automatic personality prediction,” in Proc. 56th Annu. Meeting of the Association for Computational Linguistics, Melbourne, Australia, 2018, pp. 606–611.
    [59]
    R. D. P. Principi, C. Palmero, J. C. Junior, and S. Escalera, “On the effect of observed subject biases in apparent personality analysis from audio-visual signals,” IEEE Tran. Affect. Comput., Nov. 2019. DOI: 10.1109/TAFFC.2019.2956030
    [60]
    D. Sanchez-Cortes, O. Aran, M. S. Mast, and D. Gatica-Perez, “A nonverbal behavior approach to identify emergent leaders in small groups,” IEEE Trans. Multimed., vol. 14, no. 3, pp. 816–832, Jun. 2012. doi: 10.1109/TMM.2011.2181941
    [61]
    D. Sanchez-Cortes, O. Aran, D. B. Jayagopi, M. S. Mast, and D. Gatica-Perez, “Emergent leaders through looking and speaking: From audio-visual data to multimodal recognition,” J. Multimod. User Interf., vol. 7, no. 1-2, pp. 39–53, Aug. 2013. doi: 10.1007/s12193-012-0101-0
    [62]
    C. S. J. Junior, Y. Güçlütürk, M. Pérez, U. Güçlü, C. Andujar, X. Baró, H. J. Escalante, I. Guyon, M. A. J. van Gerven, R. van Lier, and S. Escalera, “First impressions: A survey on vision-based apparent personality trait analysis,” IEEE Trans. Affect. Comput., Jul. 2019. DOI: 10.1109/TAFFC.2019.2930058
    [63]
    C. Beyan, F. Capozzi, C. Becchio, and V. Murino, “Prediction of the leadership style of an emergent leader using audio and visual nonverbal features,” IEEE Trans. Multimed., vol. 20, no. 2, pp. 441–456, Feb. 2018. doi: 10.1109/TMM.2017.2740062
    [64]
    W. F. Hsieh, Y. D. Li, E. Kasano, E. S. Simokawara, and T. Yamaguchi, “Confidence identification based on the combination of verbal and non-verbal factors in human robot interaction,” in Proc. Int. Joint Conf. Neural Networks, Budapest, Hungary, 2019, pp. 1–7.
    [65]
    B. Wu, H. Z. Ai, C. Huang, and S. H. Lao, “Fast rotation invariant multi-view face detection based on Real AdaBoost,” in Proc. 6th IEEE Int. Conf. Automatic Face and Gesture Recognition, Seoul, South Korea, 2004, pp. 79–84.
    [66]
    M. L. Knapp, J. A. Hall, and T. G. Horgan, Nonverbal Communication in Human Interaction, 8th ed. Boston, USA: Cengage Learning, 2013.
    [67]
    R. Stiefelhagen and J. Zhu, “Head orientation and gaze direction in meetings,” in Proc. CHI Extended Abstracts on Human Factors in Computing Systems, Minneapolis, USA, 2002, pp. 858–859.
    [68]
    E. Ricci and J. M. Odobez, “Learning large margin likelihoods for realtime head pose tracking,” in Proc. 16th IEEE Int. Conf. Image Processing, Cairo, Egypt, 2009.
    [69]
    J. W. Davis and A. F. Bobick, “The representation and recognition of human movement using temporal templates,” in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition, San Juan, USA, 1997, pp. 928–934.
    [70]
    A. F. Bobick and J. W. Davis, “The recognition of human movement using temporal templates,” IEEE Trans. Pattern Anal. Machine Intell., vol. 23, no. 3, pp. 257–267, Mar. 2001. doi: 10.1109/34.910878
    [71]
    M. Ross, H. Shaffer, A. Cohen, R. Freudberg, and H. Manley, “Average magnitude difference function pitch extractor,” IEEE Trans. Acoustics,Speech,Signal Process, vol. ASSP-22, no. 5, pp. 353–362, Oct. 1974.
    [72]
    D. Markel, “The SIFT algorithm for fundamental frequency estimation,” IEEE Trans. Audio Electroacoust., vol. AU-20, no. 5, pp. 367–377, Dec. 1972.
    [73]
    X. D. Mei, J. Pan, and S. H. Sun, “Efficient algorithms for speech pitch estimation,” in Proc. Int. Symp. Intelligent Multimedia, Video and Speech Processing, Hong Kong, China, 2001, pp. 421–424.
    [74]
    A. M. Noll, “Pitch determination of human speech by the harmonic product spectrum, the harmonic sum spectrum, and a maximum likelihood estimate,” in Proc. Symp. Computer Processing in Communications, Brooklyn, USA, 1969, pp. 779–797.
    [75]
    M. Xu, L. Y. Duan, J. F. Cai, L. T. Chia, C. S. Xu, and Q. Tian, “HMM-based audio keyword generation,” in Advances in Multimedia Information Processing, K. Aizawa, Y. Nakamura, and S. Satoh, Eds. Berlin, Heidelberg: Springer, 2004, pp. 566–574.
    [76]
    Sahidullah and G. Saha, “Design, analysis and experimental evaluation of block based transformation in MFCC computation for speaker recognition,” Speech Commun., vol. 54, no. 4, pp. 543–565, May 2012. doi: 10.1016/j.specom.2011.11.004
    [77]
    C. Beyan, V. Katsageorgiou, and V. Murino, “A sequential data analysis approach to detect emergent leaders in small groups,” IEEE Trans. Multimedia, vol. 21, no. 8, pp. 2107–2116, 2019.
    [78]
    S. Feese, B. Arnrich, G. Tröster, B. Meyer, and K. Jonas, “Quantifying behavioral mimicry by automatic detection of nonverbal cues from body motion,” in Proc. Int. Conf. Privacy, Security, Risk and Trust and Int. Conf. Social Computing, Amsterdam, Netherlands, 2012, pp. 520–525.
    [79]
    C. Jie and P. Peng, “Recognize the most dominant person in multi-party meetings using nontraditional features,” in Proc. IEEE Int. Conf. Intelligent Computing and Intelligent Systems, Xiamen, China, 2010, pp. 312–316
    [80]
    D. Sanchez-Cortes, D. B. Jayagopi, and D. Gatica-Perez, “Predicting remote versus collocated group interactions using nonverbal cues,” in Proc. Workshop on Multimodal Sensor-Based Systems and Mobile Phones for Social Computing, Cambridge, Britain, 2009.
    [81]
    C. Beyan, F. Capozzi, C. Becchio, and V. Murino, “Identification of emergent leaders in a meeting scenario using multiple kernel learning,” in Proc. 2nd Workshop on Advancements in Social Signal Processing for Multimodal Interaction, Tokyo Japan, 2016, pp. 3–10.
    [82]
    C. M. Bishop, Pattern Recognition and Machine Learning, New York, USA: Springer-Verlag, 2006.





    Highlights

    • Toward understanding nonverbal cues and signals in human-robot social interaction.
    • Face-to-face human-robot interaction from the robot's first-person perspective.
    • A new computational framework for enabling social robots to understand human personality traits.
    • Performance evaluation of different nonverbal social cues using machine learning techniques.
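
As a hedged illustration of the vocal cues named above, the sketch below extracts voice pitch, voice energy, and MFCC from a single recorded answer using the open-source librosa library. The file name, sampling rate, pitch range, and mean/std summarization are assumptions for demonstration; in the study these cues were obtained through Pepper's on-board sensors.

```python
# Hypothetical vocal-cue extraction (not the authors' pipeline) with librosa.
import numpy as np
import librosa

# Load a mono recording of one participant answer (placeholder path).
signal, sr = librosa.load("participant_answer.wav", sr=16000, mono=True)

# Voice pitch: frame-wise fundamental frequency via the YIN algorithm.
f0 = librosa.yin(signal, fmin=65.0, fmax=500.0, sr=sr)

# Voice energy: frame-wise root-mean-square energy of the waveform.
energy = librosa.feature.rms(y=signal)[0]

# MFCC: 13 mel-frequency cepstral coefficients per frame.
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)

# Summarize each cue over the whole answer (mean and standard deviation),
# giving a fixed-length vector that a classifier can consume.
features = np.concatenate([
    [f0.mean(), f0.std()],
    [energy.mean(), energy.std()],
    mfcc.mean(axis=1), mfcc.std(axis=1),
])
print(features.shape)  # (2 + 2 + 13 + 13,) = (30,)
```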
