Collaborative Inference for Distributed Camera System
Open Access
- Author:
- Hakimi, Zeinab
- Graduate Program:
- Computer Science and Engineering
- Degree:
- Master of Science
- Document Type:
- Master Thesis
- Date of Defense:
- April 20, 2019
- Committee Members:
- Vijaykrishnan Narayanan, Thesis Advisor/Co-Advisor
John Morgan Sampson, Committee Member
Bhuvan Urgaonkar, Program Head/Chair
- Keywords:
- Multi View Convolutional Neural Network
Distributed System
Context Awareness
DNN
Object Recognition
- Abstract:
- Recently, it has been shown that using multiple sources of sensor data can improve the accuracy of inference in distributed networks. However, there are two challenges in achieving this goal: (i) sensors inherently provide information of differing quality, so it is essential to identify the information contribution of each sensor in order to improve the efficiency of the overall system, and (ii) the unequal information contribution of sensors necessitates re-examining the assumptions about noise tolerance and model failure in distributed systems. To tackle the first challenge, this thesis proposes a Multi-View Convolutional Neural Network (MVCNN) for distributed camera systems that leverages likelihood estimation. We use entropy estimation to reduce the communication cost between the front end and the back end and to enhance the object-classification performance of the system. Applying our framework to the Princeton ModelNet 3D CAD dataset and iLab-80M with 12 views, we reach 89% top-1 classification accuracy on the tested datasets while pruning 66.7% of the sensor nodes. To address the second challenge, we design a fault-tolerance mechanism for situations in which the network cannot overcome failures on its own. Experiments demonstrate that our MVCNN maintains near-optimal accuracy when the fraction of noisy failures is less than 40%. Moreover, the proposed robustness mechanism increases accuracy by 8.54% in the presence of 60% node failures.
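The sketch below illustrates the entropy-driven view-pruning idea summarized in the abstract: each camera node scores its own prediction by entropy, and only the most confident views are transmitted and fused at the back end. It is a minimal illustration under assumed details (the function names, the simple averaging fusion, and the keep_fraction value are illustrative choices, not the thesis's exact implementation).

```python
import numpy as np

def view_entropy(probs, eps=1e-12):
    """Shannon entropy of one view's softmax output (lower = more confident)."""
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p))

def select_informative_views(view_probs, keep_fraction=1/3):
    """Keep only the views whose predictions have the lowest entropy.

    view_probs: per-view class-probability vectors from the front-end nodes.
    keep_fraction=1/3 mirrors pruning roughly 66.7% of the sensor nodes.
    """
    entropies = [view_entropy(p) for p in view_probs]
    k = max(1, int(round(len(view_probs) * keep_fraction)))
    return np.argsort(entropies)[:k]  # indices of the most confident views

def fuse_views(view_probs, keep_idx):
    """Back-end fusion: average the retained views' class probabilities."""
    kept = np.stack([view_probs[i] for i in keep_idx])
    return kept.mean(axis=0)

# Example with 12 simulated camera views and 10 classes.
rng = np.random.default_rng(0)
views = [rng.dirichlet(np.ones(10)) for _ in range(12)]
keep = select_informative_views(views, keep_fraction=1/3)
print("transmitted views:", sorted(keep.tolist()))
print("fused prediction:", fuse_views(views, keep).argmax())
```

In this sketch, pruning low-confidence views both cuts the front-end-to-back-end communication cost (only the retained views are sent) and keeps noisy or uninformative views from degrading the fused prediction.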