Evaluation of WikiTalk - User studies of human-robot interaction

{{Infobox Publication
|type=conference paper
|title=Evaluation of WikiTalk - User studies of human-robot interaction
|authors=Anastasiou D., Jokinen K., Wilcock G.
|publishedin=Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
|date=2013
|volume=8007 LNCS
|issue=PART 4
|page-start=32
|page-end=42
|language=English
|abstract=The paper concerns the evaluation of Nao WikiTalk, an application that enables a Nao robot to serve as a spoken open-domain knowledge access system. With Nao WikiTalk the robot can talk about any topic the user is interested in, using Wikipedia as its knowledge source. The robot suggests some topics to start with, and the user shifts to related topics by speaking their names after the robot mentions them. The user can also switch to a totally new topic by spelling the first few letters. As well as speaking, the robot uses gestures, nods and other multimodal signals to enable clear and rich interaction. The paper describes the setup of the user studies and reports on the evaluation of the application, based on various factors reported by the 12 users who participated. The study compared the users' expectations of the robot interaction with their actual experience of the interaction. We found that the users were impressed by the lively appearance and natural gesturing of the robot, although in many respects they had higher expectations regarding the robot's presentation capabilities. However, the results are positive enough to encourage research on these lines.
|keywords=Evaluation, gesturing, multimodal human-robot interaction, Wikipedia
|extrakeywords=Access system, Evaluation, gesturing, Knowledge sources, Multi-modal, Robot interactions, User study, Wikipedia, Human robot interaction, Man machine systems, Human computer interaction, WikiTalk
|references={{reference|full=Norman, D.A., (1988) The Psychology of Everyday Things, Basic Books, New York}}    {{reference|full=Jokinen, K., Rational communication and affordable natural language interaction for ambient environments (2010) LNCS, 6392, pp. 163-168. Lee, G.G., Mariani, J., Minker, W., Nakamura, S. (eds.) IWSDS 2010. Springer, Heidelberg}}    {{reference|full=Jokinen, K., McTear, M., Spoken Dialogue Systems (2009) Synthesis Lectures on Human Language Technologies, 2 (1). Morgan & Claypool}}    {{reference|full=Jokinen, K., Wilcock, G., Constructive interaction for talking about interesting topics (2012) Proceedings of the Eighth Language Resources and Evaluation Conference (LREC 2012), Istanbul, pp. 404-410}}    {{reference|full=Csapo, A., Gilmartin, E., Grizou, J., Han, J., Meena, R., Anastasiou, D., Jokinen, K., Wilcock, G., Multimodal conversational interaction with a humanoid robot (2012) Proceedings of 3rd IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2012), Kosice, pp. 667-672}}    {{reference|full=Han, J., Campbell, N., Jokinen, K., Wilcock, G., Integrating the use of non-verbal cues in human-robot interaction with a Nao robot (2012) Proceedings of 3rd IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2012), Kosice, pp. 679-683}}    {{reference|full=Meena, R., Jokinen, K., Wilcock, G., Integration of gestures and speech in human-robot interaction (2012) Proceedings of 3rd IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2012), Kosice, pp. 673-678}}    {{reference|full=Kendon, A., (2005) Gesture: Visible Action as Utterance, Cambridge University Press, Cambridge}}    {{reference|full=Jokinen, K., Pointing gestures and synchronous communication management (2010) LNCS, 5967, pp. 33-49. Esposito, A., Campbell, N., Vogel, C., Hussain, A., Nijholt, A. (eds.) Second COST 2102. Springer, Heidelberg}}    {{reference|full=Jokinen, K., Hurtig, T., User expectations and real experience on a multimodal interactive system (2006) Proceedings of 9th International Conference on Spoken Language Processing (Interspeech 2006), Pittsburgh}}    {{reference|full=Wilcock, G., WikiTalk: A spoken Wikipedia-based open-domain knowledge access system (2012) Proceedings of the COLING-2012 Workshop on Question Answering for Complex Domains, Mumbai, pp. 57-69}}
|issn=0302-9743
|isbn=9783642393297
|doi=10.1007/978-3-642-39330-3_4
}}

Latest revision as of 20:57, October 11, 2015

Evaluation of WikiTalk - User studies of human-robot interaction is a 2013 conference paper written in English by Anastasiou D., Jokinen K., Wilcock G. and published in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics).

Abstract

The paper concerns the evaluation of Nao WikiTalk, an application that enables a Nao robot to serve as a spoken open-domain knowledge access system. With Nao WikiTalk the robot can talk about any topic the user is interested in, using Wikipedia as its knowledge source. The robot suggests some topics to start with, and the user shifts to related topics by speaking their names after the robot mentions them. The user can also switch to a totally new topic by spelling the first few letters. As well as speaking, the robot uses gestures, nods and other multimodal signals to enable clear and rich interaction. The paper describes the setup of the user studies and reports on the evaluation of the application, based on various factors reported by the 12 users who participated. The study compared the users' expectations of the robot interaction with their actual experience of the interaction. We found that the users were impressed by the lively appearance and natural gesturing of the robot, although in many respects they had higher expectations regarding the robot's presentation capabilities. However, the results are positive enough to encourage research on these lines.

References

This section requires expansion. Please help!

Cited by

This publication is probably cited by others, but no articles citing it are available in WikiPapers.