Abstract
This study examined the factors that affect artificial intelligence (AI) chatbot users' use of profanity and offensive words, employing the concepts of ethical ideology, social competence, and perceived humanlikeness of chatbots. The study also examined users' liking of chatbots' responses to their utterances of profanity and offensive words. Using a national survey (N = 645), the study found that users' idealism orientation was a significant factor in explaining the use of such offensive language. In addition, users high in idealism favored chatbots' active intervention, whereas those high in relativism favored chatbots' reactive responses. Moreover, users' perceived humanlikeness of a chatbot increased their likelihood of using offensive words targeting dislikable acquaintances, racial/ethnic groups, and political parties. These findings are expected to help fill the gap between the widespread use of AI chatbots and the scarcity of empirical studies examining language use with them.
| Original language | English |
|---|---|
| Article number | 106795 |
| Journal | Computers in Human Behavior |
| Volume | 121 |
| DOIs | |
| Publication status | Published - 2021 Aug |
Bibliographical note
Funding Information: This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of the Republic of Korea (NRF-2017S1A5A8022666).
Publisher Copyright:
© 2021 Elsevier Ltd
All Science Journal Classification (ASJC) codes
- Arts and Humanities (miscellaneous)
- Human-Computer Interaction
- Psychology (all)