Optimizing Energy and Performance for Next-Generation Extended Reality Devices

Restricted (Penn State Only)
- Author:
- Zhao, Shulin
- Graduate Program:
- Computer Science and Engineering
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- December 17, 2021
- Committee Members:
- Chitaranjan Das, Major Field Member
- Mahmut Kandemir, Co-Chair & Dissertation Advisor
- Anand Sivasubramaniam, Co-Chair & Dissertation Advisor
- Dinghao Wu, Outside Unit & Field Member
- Chitaranjan Das, Program Head/Chair
- Keywords:
- Extended Reality
- Virtual Reality
- Augmented Reality
- Energy-efficiency
- Performance
- Abstract:
- Extended Reality (XR) techniques have recently transformed our lives across diverse domains, including sports, media, healthcare, and gaming, by enabling 360-degree artificial sensory stimulation through a Virtual Reality (VR) or Augmented Reality (AR) headset, and by connecting the physical world with virtual/augmented information created in an intelligent (i.e., neural-network-assisted) and real-time fashion. Integrating the real-time visualization of rapidly changing 3D scenes into resource-constrained, battery-operated XR edge devices is one of the most promising yet challenging research problems for architects. In other words, a good XR design requires both energy efficiency and a high quality of service (QoS). From the energy consumption perspective, unlike conventional planar video processing, where memory access is the main bottleneck, in the 360-degree VR/AR processing pipeline the compute (e.g., rendering and holographic display driven by various inputs such as sensors, color images, and depth images) is the primary bottleneck and contributes more than 50% of the energy consumption in battery-operated headsets. Thus, improving the computational efficiency of the video pipeline is critical. From the performance point of view, although several emerging techniques, such as deep neural networks (DNNs) for higher accuracy and 3D holographic display for more realistic scene viewing, have been integrated into the XR pipeline to improve QoS, other performance metrics such as DNN inference throughput and hologram execution latency have been ignored by prior work. Hence, a full design space exploration from both the hardware and software angles is required to efficiently optimize the XR processing pipeline. The objective of this proposal is to maximize energy efficiency without jeopardizing video processing performance for XR devices.
Towards this, the proposal aims to address the following four issues: (i) designing an energy-efficient sensing platform to optimize the energy consumption of the various input sensors (e.g., inertial measurement unit, depth sensor) attached to an XR headset; (ii) designing a distributed inference framework to leverage the available compute resources and improve inference throughput; (iii) exploiting opportunities for temporal reuse by memoizing head orientation, and for spatial reuse by establishing a relationship between the left- and right-eye projections in 360-degree video processing; and (iv) taking advantage of temporal similarity in users' head movement and eye tracking to reduce computation and perform hologram execution in a "just-in-time" manner. Together, these four tasks improve the performance and energy efficiency of next-generation XR headsets.
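To make the temporal-reuse idea in item (iii) concrete, the following is a minimal illustrative sketch (not the dissertation's actual implementation): head orientations that fall within a small angular tolerance are quantized to the same cache key, so a previously rendered viewport frame can be returned instead of re-rendering. The `FrameCache` class, the quantization step, and `render_fn` are all hypothetical names introduced here for illustration.

```python
def quantize(orientation, step=2.0):
    # Snap yaw/pitch/roll (in degrees) to a coarse grid so that
    # near-identical head poses map to the same cache key.
    return tuple(round(angle / step) for angle in orientation)

class FrameCache:
    """Memoize rendered viewport frames by quantized head orientation."""

    def __init__(self, step=2.0):
        self.step = step
        self.cache = {}

    def render(self, orientation, render_fn):
        key = quantize(orientation, self.step)
        if key in self.cache:
            # Temporal reuse: the head pose is close enough to a
            # recent one, so skip the expensive render entirely.
            return self.cache[key]
        frame = render_fn(orientation)  # cache miss: full render
        self.cache[key] = frame
        return frame
```

With a 2-degree step, two poses such as (10.1, 0, 0) and (10.4, 0, 0) share a key, so the second request is served from the cache; shrinking the step trades energy savings for pose fidelity.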