Deciphering Deception: How Different Rhetoric of AI Language Impacts Users’ Sense of Truth in LLMs

Dahey Yoo, Hyunmin Kang, Changhoon Oh

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Users are increasingly exposed to AI-generated language, which presents potential deception and communication risks. This study examined how the rhetorical aspects of AI-generated language influence users' truth discernment. We conducted a user study comparing three levels of rhetorical presence and four persuasive rhetorical elements, using interviews to understand users' truth-detection methods. Results showed that outputs with fewer rhetorical elements made it difficult for users to distinguish truth from falsehood, while those with more rhetoric often misled users into accepting falsehoods as true. Users' expectations of AI influenced their truth judgments: responses that met those expectations were perceived as more truthful. Casual, human-like responses were often deemed false, while technical, precise AI responses were preferred. This research emphasizes that the rhetorical elements of AI language can significantly bias individuals regardless of a statement's actual truth. To enhance transparency in human-AI communication, AI designs should thoughtfully integrate rhetorical elements and establish guiding principles aimed at minimizing the potential for deceptive responses.

Original language: English
Journal: International Journal of Human-Computer Interaction
DOIs
Publication status: Accepted/In press - 2024

Bibliographical note

Publisher Copyright:
© 2024 Taylor & Francis Group, LLC.

All Science Journal Classification (ASJC) codes

  • Human Factors and Ergonomics
  • Human-Computer Interaction
  • Computer Science Applications
