Adversarial Examples in Constrained Domains

Restricted (Penn State Only)
Author:
Sheatsley, Ryan Matthew
Graduate Program:
Computer Science and Engineering
Degree:
Master of Science
Document Type:
Master Thesis
Date of Defense:
November 13, 2018
Committee Members:
  • Patrick Drew McDaniel, Thesis Advisor
Keywords:
  • machine learning
  • adversarial machine learning
  • network intrusion detection
Abstract:
Recent advances in computer science and engineering have placed machine learning at the center of many industries, including transportation, finance, healthcare, education, and even security. However, as we enter this revolution of automation, machine learning presents its own unique challenges and inherent flaws. Research into these phenomena has unveiled adversarial examples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. These adversarial examples present a barrier to the adoption of machine learning in the aforementioned fields, particularly security. In this thesis, we present a methodology for understanding the impact of adversarial examples in a given domain, focusing on network intrusion detection. Furthermore, we show that by leveraging the inherent constraints enforced by such domains, the space of legitimate adversarial examples is limited, simplifying defenses against these malicious anomalies.
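The abstract's core idea, that domain constraints shrink the space of legitimate adversarial examples, can be illustrated with a minimal sketch. The following is not the thesis's method; it assumes a toy logistic classifier with a fast-gradient-style perturbation, and the feature bounds and binary-feature mask are illustrative stand-ins for the kinds of constraints a network intrusion detection domain might enforce.

```python
import numpy as np

# Minimal sketch (assumed setup, not the thesis's method): perturb an input
# to raise the loss of a toy logistic classifier, then project the result
# back onto hypothetical domain constraints.

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1      # toy linear classifier parameters
x = rng.uniform(size=8)             # one feature vector (e.g., a network flow)
y = 1.0                             # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the logistic (cross-entropy) loss with respect to the input x.
grad_x = (sigmoid(w @ x + b) - y) * w

# Unconstrained fast-gradient-style step: nudge every feature toward higher loss.
eps = 0.2
x_adv = x + eps * np.sign(grad_x)

# Domain constraints limit which perturbed inputs remain legitimate: here,
# assumed physical bounds on each feature and two assumed binary flag features
# that cannot take fractional values.
lo, hi = np.zeros(8), np.ones(8)
binary_mask = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)

x_adv = np.clip(x_adv, lo, hi)                      # restore valid ranges
x_adv[binary_mask] = np.round(x_adv[binary_mask])   # restore binary features

print("clean score:", sigmoid(w @ x + b))
print("adv score:  ", sigmoid(w @ x_adv + b))
```

The projection step is the point of the sketch: once bounds and discreteness are enforced, many gradient-suggested perturbations collapse back toward valid inputs, which is one way the constrained space can simplify defense.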