Acoustical User Identification Based on MFCC Analysis of Keystrokes
Matus Pleva, Eva Kiktova, Jozef Juhar, Patrick Bours
DOI: 10.15598/aeee.v13i4.1466
Abstract
This paper introduces a novel approach to person identification based on acoustic monitoring of a required word being typed on the monitored keyboard. The experiment was motivated by the idea of the COST IC1106 (Integrating Biometrics and Forensics for the Digital Age) partners to acoustically analyse a captured keystroke dynamics database using widely used time-invariant mathematical modelling tools. MFCC (Mel-Frequency Cepstral Coefficients) features and HMM (Hidden Markov Models) were used in this experiment, yielding a promising accuracy of 99.33% when testing 25% of the realizations (randomly selected from 100) while identifying among 50 users/models. The experiment was repeated for different training/testing configurations and cross-validated, so this first approach could be a good starting point for future research, including feature selection algorithms, biometric authentication score normalization, tests with different audio and keyboard setups, etc.
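To illustrate the MFCC feature extraction step named in the abstract, the following is a minimal numpy-only sketch of computing MFCC vectors from an audio signal. All parameters (sample rate, frame size, hop, filterbank size) are illustrative assumptions for a generic setup, not the paper's actual experimental configuration, and the input signal here is synthetic noise standing in for a recorded keystroke.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=44100, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    # 1. Split the signal into overlapping Hamming-windowed frames.
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hamming(n_fft)
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    # 2. Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 3. Triangular mel-spaced filterbank (n_mels filters).
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # 4. Log mel energies, then DCT-II to decorrelate them into
    #    cepstral coefficients (keep the first n_ceps).
    log_mel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2.0 * n_mels)))
    return log_mel @ dct.T

# Synthetic 0.5 s decaying-noise burst as a stand-in "keystroke" recording.
sig = np.random.randn(22050) * np.exp(-np.linspace(0, 8, 22050))
feats = mfcc(sig)
print(feats.shape)  # -> (85, 13): one 13-dim MFCC vector per frame
```

In a setup like the one the abstract describes, sequences of such per-frame feature vectors would then be modelled per user with an HMM, and identification would select the user model with the highest likelihood for the test utterance.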