Sensor Aware Machine Learning For Edge Devices

Open Access
- Author:
- Kim, Dong
- Graduate Program:
- Computer Science and Engineering
- Degree:
- Master of Science
- Document Type:
- Master Thesis
- Date of Defense:
- April 15, 2019
- Committee Members:
- Vijaykrishnan Narayanan, Thesis Advisor/Co-Advisor
- Kyusun Choi, Committee Member
- John Morgan Sampson, Committee Member
- Keywords:
- convolutional neural network
- machine learning
- synthetic dataset
- Abstract:
- Neural networks have produced breakthroughs in numerous domains, and many owe their success to the availability of large, labeled datasets. These datasets help solve the problems for which they were designed, but are ineffective at yielding solutions to problems that differ drastically in context (e.g., daytime versus nighttime images). Even if the underlying task (e.g., species recognition) is very similar, deployment conditions can vary so much that more labeled samples are needed. For instance, even if prior efforts anticipated that determining the species of an animal requires images taken at different times of day, they may not have also considered indoor versus outdoor conditions in the training set, which would affect classification rates in zoo versus domestic versus in-nature settings. Despite an increasing number of public datasets, more are always desired, namely labeled datasets for new tasks that are not yet popular enough to have justified the manual labeling effort. In this thesis, we synthesize training datasets with awareness of environmental factors, specifically lighting conditions and multiple viewpoints of the same object. We explore dataset synthesis as a means to circumvent the limitations of manually collecting and labeling samples under a large range of potential environments. We consider how context-aware datasets might produce models that achieve the best classification accuracy under different deployment conditions, and how an endpoint device can predict the correct model to use. We conclude that awareness of viewpoint matters more than awareness of lighting conditions, and that ultimately, training for the specific environmental conditions produces the best model for that particular environment.
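The abstract does not describe the synthesis pipeline itself, but the lighting-aware dataset generation it mentions can be sketched roughly as relighting a base image under several simulated conditions. The sketch below is an illustrative assumption, not the author's actual method; the function names (`relight`, `synthesize_lighting_variants`) and the gamma/gain model of lighting are hypothetical choices for demonstration.

```python
import numpy as np

def relight(image, gamma=1.0, gain=1.0):
    """Simulate a lighting condition via gamma/gain adjustment.

    `image` is a float array with values in [0, 1]. For such values,
    gamma < 1 brightens the image (daylight-like) and gamma > 1
    darkens it (night-like). Results are clipped back into [0, 1].
    """
    return np.clip(gain * np.power(image, gamma), 0.0, 1.0)

def synthesize_lighting_variants(image, gammas=(0.5, 1.0, 2.0)):
    """Expand one labeled sample into several lighting conditions,
    so a single collected image yields multiple training samples."""
    return [relight(image, g) for g in gammas]

# Toy example: a random 4x4 grayscale "image" stands in for a real photo.
rng = np.random.default_rng(0)
img = rng.random((4, 4))
variants = synthesize_lighting_variants(img)
```

Each variant keeps the original label, so labeling effort is amortized across the synthesized lighting conditions; an analogous expansion could be applied per viewpoint given a 3D model or multi-view capture.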