People with complex communication needs can use high-technology Augmentative and Alternative Communication (AAC) devices to communicate with others. Researchers and clinicians currently rely on automated data logging from these devices to analyze AAC user performance. However, existing data logging systems cannot differentiate authorship within a log when more than one person operates the device, for example, when a clinician or caregiver demonstrates a message before the AAC user produces it. This limitation reduces the validity of the data logs and complicates performance analysis. Therefore, this paper presents a deep neural network-based visual analysis approach that processes videos of practice sessions to detect which user is operating the AAC device. This approach has significant potential to improve the validity of data logs and ultimately to enhance AAC outcome measures.
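The abstract does not specify the network architecture, so the following is only a minimal sketch of one plausible pipeline for attributing session-video frames to known users: detect faces with a pretrained detector, embed them with a pretrained face-recognition network, and match each embedding to enrolled reference embeddings. The libraries (facenet_pytorch, OpenCV), the file name session.mp4, the enrolled dictionary, the sampling interval, and the similarity threshold are all illustrative assumptions, not details from the paper.

```python
# Illustrative sketch (not the authors' implementation) of attributing
# video frames to different users via pretrained face embeddings.
import cv2
import torch
from facenet_pytorch import MTCNN, InceptionResnetV1

device = "cuda" if torch.cuda.is_available() else "cpu"
mtcnn = MTCNN(keep_all=True, device=device)                # face detector
resnet = InceptionResnetV1(pretrained="vggface2").eval().to(device)  # embedder


def embed_faces(rgb_frame):
    """Return one 512-d embedding per detected face, or None if no faces."""
    faces = mtcnn(rgb_frame)                               # (n, 3, 160, 160) or None
    if faces is None:
        return None
    with torch.no_grad():
        return resnet(faces.to(device))                    # (n, 512)


def identify(embeddings, enrolled, threshold=0.7):
    """Label each face with the closest enrolled user by cosine similarity.

    `enrolled` maps user names to reference embeddings computed the same
    way from short enrollment clips; threshold 0.7 is an assumed value.
    """
    labels = []
    for emb in embeddings:
        sims = {name: torch.cosine_similarity(emb, ref, dim=0).item()
                for name, ref in enrolled.items()}
        name, score = max(sims.items(), key=lambda kv: kv[1])
        labels.append(name if score >= threshold else "unknown")
    return labels


# Example: tag every 30th frame of a practice-session recording.
enrolled = {}  # e.g., {"aac_user": ref_emb_1, "clinician": ref_emb_2}
cap = cv2.VideoCapture("session.mp4")  # hypothetical file name
frame_idx = 0
while True:
    ok, bgr = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:
        rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)         # OpenCV reads BGR
        embs = embed_faces(rgb)
        if embs is not None and enrolled:
            print(frame_idx, identify(embs, enrolled))
    frame_idx += 1
cap.release()
```

Under this scheme, each logged device event could be time-aligned with the identity label of the nearest sampled frame, so log entries produced by a clinician or caregiver could be separated from those produced by the AAC user.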