TOWARDS A FRAMEWORK FOR AUTOMATED EVALUATION OF COMPLEX PROBLEM SOLVING ENVIRONMENTS

Open Access
Author:
Bhandarkar, Damodar
Graduate Program:
Industrial Engineering
Degree:
Doctor of Philosophy
Document Type:
Dissertation
Date of Defense:
December 13, 2007
Committee Members:
  • Ling Rothrock, Committee Chair
  • Soundar T Kumara, Committee Member
  • Richard Donovan Koubek, Committee Member
  • Frank Edward Ritter, Committee Member
Keywords:
  • simulation
  • human-computer interaction
  • human factors
Abstract:
Interactive simulation models have gained increasing popularity over the last decade for human training and evaluation. These simulations provide a unique opportunity to study complex problem-solving behavior in real-life tasks that are otherwise difficult to recreate in a laboratory. While the complexity of such tasks provides a rich assortment of problems for studying human performance in a wide variety of complex settings, user experimentation, often with human experts, adds considerable time and cost. There are, therefore, significant benefits to representing automatic decision-making behavior in complex simulation models by synthesizing knowledge of the decision processes, limitations, and capabilities of human participants early in the design process. By introducing automatic decision making, the total available simulation experiment time and resources may be used more efficiently to expand experiment scope, explore new alternatives, or perform additional sensitivity and validation activities. Techniques that generate automatic decisions approximating expert decision-making behavior are therefore of interest, both as an automatic decision-making component in simulation experiments and as a tool to assess the level of decision-making performance. The main objective of this research is to explore automation techniques for conducting effort-accuracy analyses of human strategies in complex simulation models. Of particular interest, and a major challenge, is the conceptualization and representation of behavior so that relationships between wide-ranging human strategies and their differential performance can be assessed and explained quantitatively. To address this representational issue, this study adopts a theoretical framework of human information processing that accounts for the multiple activities of the effort-accuracy trade-off.
By approximating strategies over attribute processing and plan processing, an argument is made that a holistic representation, better suited to predicting the effort-accuracy relationships of a broad range of information-processing strategies, can be achieved. Using this representation, the study models a human-machine task environment within a Constraint Satisfaction Framework. Behavior in this framework is generated by a search mechanism that accounts for both the cost and the accuracy of processing. Because of this quantitative representation, strategies can be expressed as constraints in the model that control the search through an underlying network of tasks. We demonstrate this framework through the analysis of strategies in a path-planning task in an existing simulation of a command-and-control domain. The proposed constraint-based framework provides a useful analytical tool because it also allows a number of other environment variables of interest to be represented as constraints in an additive and independent manner. As a result, the differential performance of information-processing strategies under varying conditions can be predicted, providing an estimate of the kind of adaptive behavior that may be expected from an expert human in complex problem-solving tasks. Lastly, the utility of the proposed approach is demonstrated in an empirical study with 24 human participants. The study benchmarks the correspondence between model outcomes and observed human outcomes in the path-planning task. Specifically, it examines the impact of the participants' observed process parameters on the expert model and compares expert-model and human performance across categories of data. The results, while showing high correlation between model and human data, also revealed deviations in performance. Data analysis suggested that these performance deviations correlated with the search space.
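The constraint-driven search described above can be sketched in miniature as follows. This is an illustrative approximation only, not the dissertation's actual model: the task network, the `(effort, accuracy)` edge weights, and the single `effort_budget` constraint are hypothetical stand-ins for the richer set of strategy and environment constraints used in the study.

```python
import heapq

def plan_path(network, start, goal, effort_budget):
    """Best-first search over a task network whose edges carry
    (effort, accuracy) weights. A strategy is expressed as a
    constraint -- here a simple effort budget -- that prunes
    branches of the search. Returns (path, total_effort,
    total_accuracy) for the most accurate feasible path, or
    None if no path satisfies the constraint."""
    # Frontier entries: (-accuracy, effort, node, path); the heap
    # therefore pops the currently most accurate partial path first.
    frontier = [(-1.0, 0.0, start, [start])]
    best = None
    while frontier:
        neg_acc, effort, node, path = heapq.heappop(frontier)
        acc = -neg_acc
        if node == goal:
            if best is None or acc > best[2]:
                best = (path, effort, acc)
            continue
        for nxt, (e, a) in network.get(node, {}).items():
            if nxt in path:
                continue  # avoid cycles
            new_effort = effort + e
            if new_effort > effort_budget:
                continue  # the strategy constraint prunes this branch
            # Accuracy compounds multiplicatively along the path.
            heapq.heappush(frontier, (-(acc * a), new_effort, nxt, path + [nxt]))
    return best

# A toy task network: node -> {successor: (effort, accuracy)}.
demo_net = {
    "A": {"B": (1.0, 0.90), "C": (3.0, 0.99)},
    "B": {"D": (1.0, 0.90)},
    "C": {"D": (1.0, 0.99)},
}
```

With a generous effort budget the search prefers the slower, more accurate route A-C-D; tightening the budget forces it onto the cheaper, less accurate route A-B-D, which is the effort-accuracy trade-off the framework is meant to expose.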
Subsequent analysis indicated that this difference in performance arose because human participants employed simplification procedures during their interaction with the task, as opposed to the “ideal” procedures used by the model. In sum, while the approach presented here showed promise for the representation and analysis of human information processing, it also drew attention to the challenges and considerations that must inform future research involving human strategies. In closing, this research extends the existing literature on human adaptive strategies beyond simple choice tasks and provides a foundation for future studies of complex problem solving.