An Exploration of Cognitive Assistants and their Challenges
Open Access
- Author:
- Maier, Torsten
- Graduate Program:
- Industrial Engineering (PHD)
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- June 14, 2021
- Committee Members:
- Saeed Abdullah, Outside Unit & Field Member
- Ling Rothrock, Major Field Member
- Chris McComb, Co-Chair & Dissertation Advisor
- Jessica Menold, Co-Chair & Dissertation Advisor
- Steven Landry, Program Head/Chair
- Keywords:
- cognitive assistant
- mental workload
- trust
- artificial intelligence
- brainstorming
- affordances
- Abstract:
- Artificial intelligence (AI) is revolutionizing the world by allowing computers to perform near-human functions such as voice and visual recognition. Examples of AI in our daily lives include purchase recommendations [1] and customer service [2]. By combining artificial intelligence with natural language interfaces (NLIs; interfaces that use verbal commands), cognitive assistants (CAs) can assist users with a variety of tasks ranging from engineering design [3] to healthcare applications [4]. However, the effectiveness of these devices in assisting humans is influenced by aspects of the device, such as performance and reliability; aspects of the task, such as task type and complexity; and aspects of the user, such as trust in the intelligent agent [5]. To fully understand and optimize how users interact with these devices, critical questions investigating the impact of AI and CAs on the human-computer relationship must be asked. Therefore, the objective of this dissertation was to explore and understand the cognitive challenges faced when working with AI and CAs. To address this objective, an ontology of the field of CAs was developed, an exploratory study to define and scope the cognitive challenges faced by CA users was performed, and a specific CA use case was assessed. Finally, after these steps were completed, trust was identified as a factor inhibiting CA usability, and a study was performed to understand the inner workings of trust and performance not only in CAs but in the larger field of AI as a whole. Results of the CA studies found that the lack of physical affordances in CAs can lead to frustration and misunderstanding. Mental workload was found to be both positively and negatively impacted by CAs, depending on the context. Additionally, CAs can positively impact performance and decrease frustration in users. Finally, trust was found to be a significant inhibitor of CA usability.
The results of the trust and performance study in AI found the difference between human and AI performance to be a moderate predictor of change in trust. This differs from previous literature, which found AI performance alone to be a good predictor of trust. However, this study was completed in a context where AI and human capabilities were similar, which may account for the divergence from previous findings. Additionally, evidence suggests that behaviors and biases typically seen in human-human interactions may also occur in human-AI interactions when AI transparency is low. Finally, the results supported previous work indicating the importance of analogical trust processes.