TY - JOUR
T1 - Deciphering Deception
T2 - How Different Rhetoric of AI Language Impacts Users’ Sense of Truth in LLMs
AU - Yoo, Dahey
AU - Kang, Hyunmin
AU - Oh, Changhoon
N1 - Publisher Copyright:
© 2024 Taylor & Francis Group, LLC.
PY - 2024
Y1 - 2024
N2 - Users are increasingly exposed to AI-generated language, which presents potential risks of deception and miscommunication. This study examined how the rhetorical aspects of AI-generated language influence users’ truth discernment. We conducted a user study comparing three levels of rhetorical presence and four persuasive rhetorical elements, using interviews to understand users’ truth-detection methods. Results showed that outputs with fewer rhetorical elements made it difficult for users to distinguish truth from falsehood, while those with more rhetoric often misled users into accepting false statements as true. Users’ expectations of AI influenced their truth judgments, with responses that met those expectations perceived as more truthful. Casual, human-like responses were often deemed false, whereas technical, precise AI responses were preferred. This research emphasizes that the rhetorical elements of AI language can significantly bias individuals regardless of a statement’s actual truth. For greater transparency in human-AI communication, AI designs should thoughtfully integrate rhetorical elements and establish guiding principles that minimize the potential for deceptive responses.
AB - Users are increasingly exposed to AI-generated language, which presents potential risks of deception and miscommunication. This study examined how the rhetorical aspects of AI-generated language influence users’ truth discernment. We conducted a user study comparing three levels of rhetorical presence and four persuasive rhetorical elements, using interviews to understand users’ truth-detection methods. Results showed that outputs with fewer rhetorical elements made it difficult for users to distinguish truth from falsehood, while those with more rhetoric often misled users into accepting false statements as true. Users’ expectations of AI influenced their truth judgments, with responses that met those expectations perceived as more truthful. Casual, human-like responses were often deemed false, whereas technical, precise AI responses were preferred. This research emphasizes that the rhetorical elements of AI language can significantly bias individuals regardless of a statement’s actual truth. For greater transparency in human-AI communication, AI designs should thoughtfully integrate rhetorical elements and establish guiding principles that minimize the potential for deceptive responses.
KW - AI-generated language
KW - ALIED theory
KW - language expectancy theory
KW - deception detection
KW - human-AI communication
KW - rhetoric
UR - http://www.scopus.com/inward/record.url?scp=85186430970&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85186430970&partnerID=8YFLogxK
U2 - 10.1080/10447318.2024.2316370
DO - 10.1080/10447318.2024.2316370
M3 - Article
AN - SCOPUS:85186430970
SN - 1044-7318
JO - International Journal of Human-Computer Interaction
JF - International Journal of Human-Computer Interaction
ER -