Measuring the Effectiveness of Visual Analytics and Data Fusion Techniques on Situation Awareness in Cyber-security
Open Access
- Author:
- Giacobe, Nick A
- Graduate Program:
- Information Sciences and Technology
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- December 12, 2012
- Committee Members:
- David J Hall, Dissertation Advisor/Co-Advisor
Michael David McNeese, Committee Member
Peng Liu, Committee Member
Mark Edward Ballora, Committee Member
- Keywords:
- situation awareness
cyber-security
simulation
data fusion
visualization
visual analytics
- Abstract:
- Cyber-security involves monitoring a complex network of inter-related computers to prevent, identify, and remediate undesired actions. In organizations, this work is performed by human analysts, who monitor cyber-security sensors to develop and maintain situation awareness (SA) of both normal and abnormal activities on the network. Analysts also remediate compromised computers and attempt to configure networks securely so that known vulnerabilities cannot be exploited. Research and development of new fusion algorithms and visual interfaces, both in academia and in industry, aim to increase cyber-security situation awareness. However, developers rarely assess the actual impact of a new tool on a human analyst’s performance, situational knowledge, perceived effectiveness, or perceived workload. In short, whether a new tool increases situation awareness remains unproven, because the analyst’s SA is not measured. This dissertation addresses the measurement of the impact of interface design on SA by implementing an SA Assessment Battery for the cyber domain. While a number of SA assessment techniques exist, they were designed for other domains, especially military command and control, aircraft piloting, and air traffic control. This dissertation leverages that work and adapts these assessment techniques to the cyber-security domain. The assessment battery is validated by comparing two interface designs (“high” and “low” perceived workload) across two groups of research subjects (“novices” and “experts”) in a 2x2 between-subjects experiment. The results demonstrate how SA assessment techniques that were previously considered incompatible can be used together to evaluate the effectiveness of the interface separately from the impact of the human analyst’s experience.
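
To make the experimental logic concrete, the following is a minimal, hypothetical sketch (not taken from the dissertation) of how a 2x2 between-subjects design like the one described can separate the interface effect from the experience effect: a two-way ANOVA on an SA score, with interface and experience as crossed factors. The data values, the `sa_score` measure, and the use of Python with pandas/statsmodels are all illustrative assumptions.

```python
# Hypothetical sketch: two-way ANOVA for a 2x2 between-subjects design.
# All numbers are invented for illustration; the dissertation's actual
# measures and analysis may differ.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One SA score per subject, with the two between-subjects factors
# from the design: interface workload ("low"/"high") and
# analyst experience ("novice"/"expert").
data = pd.DataFrame({
    "interface":  ["low", "low", "low", "low",
                   "high", "high", "high", "high"],
    "experience": ["novice", "novice", "expert", "expert"] * 2,
    "sa_score":   [62, 58, 75, 71, 70, 66, 83, 80],  # invented values
})

# Model SA score on both factors and their interaction, so the
# interface effect is estimated separately from the experience effect.
model = ols("sa_score ~ C(interface) * C(experience)", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)  # main effects of interface and experience, plus interaction
```

In a design like this, a significant main effect of `interface` with no interaction would suggest the interface helps novices and experts alike, which is the kind of separation the abstract describes.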