2018 IEEE Symposium on Security and Privacy (SP) (2018)
San Francisco, CA, US
May 21, 2018 to May 23, 2018
ISSN: 2375-1207
ISBN: 978-1-5386-4353-2
pp. 253-269
Yimin Chen , Arizona State University
Tao Li , Arizona State University
Rui Zhang , University of Delaware
Yanchao Zhang , Arizona State University
Terri Hedgpeth , Arizona State University
Keystroke inference attacks pose an increasing threat to ubiquitous mobile devices. This paper presents EyeTell, a novel video-assisted attack that can infer a victim's keystrokes on a touchscreen device from a video capturing the victim's eye movements. EyeTell exploits the observation that human eyes naturally focus on and follow the keys they type, so a typing sequence on a soft keyboard results in a unique gaze trace of continuous eye movements. In contrast to prior work, EyeTell requires neither that the attacker visually observe the victim's input process nor that the victim's device be placed on a static holder. Comprehensive experiments on iOS and Android devices confirm the high efficacy of EyeTell for inferring PINs, lock patterns, and English words under various environmental conditions.
mobile devices, keystroke inference, video analysis, security

Y. Chen, T. Li, R. Zhang, Y. Zhang and T. Hedgpeth, "EyeTell: Video-Assisted Touchscreen Keystroke Inference from Eye Movements," 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, US, 2018, pp. 253-269.