Interpretable Artificial Intelligence Models to Detect Chronic and Infectious Diseases
Open Access
- Author:
- Zokaeinikoo, Maryam
- Graduate Program:
- Industrial Engineering
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- June 11, 2020
- Committee Members:
- Prasenjit Mitra, Dissertation Advisor/Co-Advisor
Soundar Rajan Tirupatikumara, Dissertation Advisor/Co-Advisor
Hui Yang, Committee Member
Qiushi Chen, Committee Member
Akhil Kumar, Outside Member
Steven James Landry, Program Head/Chair
Prasenjit Mitra, Committee Chair/Co-Chair
Soundar Rajan Tirupatikumara, Committee Chair/Co-Chair
- Keywords:
- Interpretable
Deep learning
Alzheimer's
COVID-19
Attention mechanism
- Abstract:
- Deep learning and artificial intelligence methods have revolutionized computational analytics by helping solve complex problems in many application domains, including healthcare and medicine. Deep learning methods comprise multiple linear or nonlinear layers, enabling them to learn sophisticated features and subtle patterns from high-dimensional input data. However, typical deep neural networks are often considered black-box models because they provide little insight into how they arrive at their predictions. This has hindered the successful deployment of deep learning models, especially in healthcare, where transparency and interpretability are critical to adoption in practice. This dissertation uses natural language processing, audio processing, and computer vision techniques along with deep learning to develop accurate and interpretable methods to detect chronic and infectious diseases. Three specific research topics are considered.

The first research topic focuses on detecting the onset of Alzheimer's disease using transcripts of interviews in which individuals were asked to describe a picture. We develop a hierarchical recurrent neural network (RNN) model for natural language processing that uses a novel attention-over-self-attention mechanism to model the temporal dependencies of longitudinal data. We demonstrate the interpretability of the model with the importance scores of words, sentences, and transcripts extracted from the three-level neural network.

The second problem we address seeks to eliminate the need for transcription by developing an end-to-end interpretable deep learning model that detects Alzheimer's disease from patients' raw audio interviews. Both the text and the audio models achieve new benchmark accuracy compared with previous work. These artificial intelligence models can help diagnose Alzheimer's disease in a non-invasive and affordable manner, improve patient outcomes, and contain costs.

Third, we focus on detecting Coronavirus Disease 2019 (COVID-19) from chest X-ray and computed tomography (CT) images. A novel hierarchical attention neural network model is developed to classify chest radiography images as belonging to a person with COVID-19, another infection, or no pneumonia. The model's hierarchical structure captures the dependency of features and improves performance, while the attention mechanism makes the model interpretable and transparent. This model can be used in conjunction with or instead of laboratory testing (e.g., where laboratory testing is unavailable) to detect and isolate individuals with COVID-19 and prevent onward transmission to the general population and healthcare workers.

This dissertation illustrates the use of deep learning methods on textual, audio, and visual data in medical informatics. Future work in this domain should focus on building techniques and platforms that integrate the three modalities in specific problem scenarios.
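As a rough illustration of the hierarchical attention idea described in the abstract, the following is a minimal PyTorch sketch of a generic word-to-sentence-to-document attention classifier. The GRU encoders, single-head attention pooling, layer sizes, and class count are assumptions made for exposition; this is not the dissertation's attention-over-self-attention architecture or its COVID-19 imaging model.

```python
# Illustrative sketch only: a generic hierarchical attention classifier,
# loosely following the word -> sentence -> transcript hierarchy described
# in the abstract. All hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn


class AttentionPool(nn.Module):
    """Scores each time step and returns an attention-weighted summary.

    The softmax weights double as importance scores, which is what makes
    each level of the hierarchy interpretable.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                                  # x: (batch, steps, dim)
        weights = torch.softmax(self.score(x), dim=1)      # (batch, steps, 1)
        return (weights * x).sum(dim=1), weights.squeeze(-1)


class HierarchicalAttentionClassifier(nn.Module):
    """Words -> sentence vectors -> transcript vector -> class logits."""

    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.word_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.word_pool = AttentionPool(2 * hidden_dim)
        self.sent_rnn = nn.GRU(2 * hidden_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.sent_pool = AttentionPool(2 * hidden_dim)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, tokens):                              # tokens: (batch, sentences, words)
        b, s, w = tokens.shape
        words = self.embed(tokens.view(b * s, w))           # (b*s, w, embed_dim)
        word_states, _ = self.word_rnn(words)               # (b*s, w, 2*hidden_dim)
        sent_vecs, word_attn = self.word_pool(word_states)  # (b*s, 2*hidden_dim)
        sent_states, _ = self.sent_rnn(sent_vecs.view(b, s, -1))
        doc_vec, sent_attn = self.sent_pool(sent_states)    # (b, 2*hidden_dim)
        return self.classifier(doc_vec), word_attn.view(b, s, w), sent_attn


# Toy usage: 2 transcripts, 3 sentences each, 10 token ids per sentence.
if __name__ == "__main__":
    model = HierarchicalAttentionClassifier()
    tokens = torch.randint(1, 5000, (2, 3, 10))
    logits, word_attn, sent_attn = model(tokens)
    print(logits.shape, word_attn.shape, sent_attn.shape)
```

The attention weights returned at each level play the role of the word- and sentence-importance scores that the abstract cites as the source of interpretability.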