Energy Cooperation and Information Freshness in Wireless Networks

Open Access
- Author:
- Leng, Shiyang
- Graduate Program:
- Electrical Engineering
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- February 27, 2020
- Committee Members:
- Aylin Yener, Dissertation Advisor/Co-Advisor
Aylin Yener, Committee Chair/Co-Chair
Thomas F La Porta, Committee Member
Viveck Ramesh Cadambe, Committee Member
Vinayak V Shanbhag, Outside Member
Ludmil Tomov Zikatanov, Outside Member
Kultegin Aydin, Program Head/Chair
- Keywords:
- wireless communications
energy harvesting
wireless energy transfer
information freshness
age of information
game theory
Stackelberg game
machine learning
deep reinforcement learning
stochastic geometry
dynamic programming
- Abstract:
- In this dissertation, we investigate energy cooperation and information freshness in wireless networks. As a promising solution for perpetual wireless communications, energy cooperation adds a new dimension to cooperative networks. We study energy cooperation via wireless energy transfer in two-hop networks from a game-theoretic perspective and design incentive-based cooperation schemes for wireless nodes with individual interests. We propose an asymmetric wireless energy transfer model that accounts for the tradeoff between waiting to transmit and accumulating harvested energy, achieving better system utility in rate and energy efficiency. Information freshness, measured by the age of information (AoI), has recently been introduced as a performance metric offering a novel perspective. We first study the tradeoff between AoI and the conventional metric, rate, in wireless powered networks to gain insight into AoI and rate performance under both age-optimal and rate-optimal resource allocation policies. A learning-based algorithm is proposed to approach near-optimal policies. Next, we consider cognitive radio networks with energy-harvesting secondary users. For a single pair of primary user and secondary user, we develop the optimal sensing and status-updating policy for the secondary user via dynamic programming and prove that the policy has a threshold structure. For large networks in which secondary users harvest energy from primary users, we characterize the outage of primary users and the average AoI of secondary users using stochastic geometry; random access update policies are then found by stochastic optimization. Finally, we study the general scheduling and power control problem for AoI-minimal networks with and without prior knowledge of the dynamic system states, namely the offline and online settings. We derive the optimal policy for the offline setting by solving a mixed-integer linear program. For the online setting, we propose a deep recurrent Q-learning algorithm based on deep reinforcement learning to approach the minimal average AoI.
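To make the central metric concrete: the age of information at time t is t minus the generation time of the most recently received status update, so it grows linearly between deliveries and drops when a fresher update arrives. The following is a minimal illustrative sketch (not code from the dissertation) of computing the time-average AoI in a simple discrete-time model; the `events` mapping of delivery slots to generation slots is a hypothetical input format chosen for the example.

```python
def average_aoi(events, horizon):
    """Time-average Age of Information over `horizon` slots.

    events: dict mapping delivery slot -> generation slot of the update
            delivered in that slot (assumed in-order deliveries).
    At each slot the age is the elapsed time since the generation of the
    most recently received update; it resets downward on each delivery.
    """
    last_gen = 0   # generation time of freshest received update
    total_age = 0
    for t in range(1, horizon + 1):
        if t in events:
            last_gen = events[t]   # a fresher update arrives
        total_age += t - last_gen  # age grows linearly otherwise
    return total_age / horizon

# Updates generated at slots 1 and 4, delivered at slots 2 and 5:
# per-slot ages over 6 slots are 1, 1, 2, 3, 1, 2 -> average 10/6.
print(average_aoi({2: 1, 5: 4}, 6))
```

Age-optimal policies such as those studied in the dissertation choose when to sense, sample, and transmit so as to minimize exactly this long-run average.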