NoteWordy: Investigating Touch and Speech Input on Smartphones for Personal Data Capture

Yuhan Luo, Bongshin Lee, Young Ho Kim, Eun Kyoung Choe

Research output: Contribution to journal › Article › peer-review


Abstract

Speech, as a natural and low-burden input modality, has great potential to support personal data capture. However, little is known about how people use speech input, together with traditional touch input, to capture different types of data in self-tracking contexts. In this work, we designed and developed NoteWordy, a multimodal self-tracking application integrating touch and speech input, and deployed it in the context of productivity tracking for two weeks (N = 17). Our participants used the two input modalities differently, depending on the data type as well as personal preferences, tolerance for speech recognition errors, and social surroundings. Additionally, we found that speech input reduced participants' diary entry time and enhanced the richness of the free-form text data. Drawing on these findings, we discuss opportunities for supporting efficient personal data capture with multimodal input, as well as implications for improving the user experience with natural language input when capturing various types of self-tracking data.

Original language: English
Pages (from-to): 568-591
Number of pages: 24
Journal: Proceedings of the ACM on Human-Computer Interaction
Volume: 6
Issue number: ISS
DOIs
Publication status: Published - 2022 Nov 14

Bibliographical note

Publisher Copyright:
© 2022 ACM.

All Science Journal Classification (ASJC) codes

  • Social Sciences (miscellaneous)
  • Human-Computer Interaction
  • Computer Networks and Communications
