Stochastic Resource Allocation Strategies With Uncertain Information In Sensor Networks

Open Access
Hu, Nan
Graduate Program:
Computer Science and Engineering
Doctor of Philosophy
Document Type:
Date of Defense:
October 07, 2016
Committee Members:
  • Thomas F La Porta, Dissertation Advisor
  • Thomas F La Porta, Committee Chair
  • Patrick Drew McDaniel, Committee Member
  • Sencun Zhu, Committee Member
  • Costas D Maranas, Outside Member
Keywords:
  • stochastic resource allocation
  • markov decision process
  • uncertainty
  • sensor network
Support for intelligent and autonomous resource management is one of the key factors in the success of modern sensor network systems. Limited resources, such as exhaustible battery life, moderate processing ability, and finite bandwidth, restrict a system's ability to simultaneously accommodate all missions submitted by users. To achieve the optimal profit under such dynamic conditions, the value of each mission, quantified by its demand on resources and its achievable profit, needs to be properly evaluated in different situations. In practice, uncertainties may exist throughout the entire execution of a mission and thus should not be ignored. For a single mission, due to uncertainties such as the unreliable wireless medium and the variable quality of sensor outputs, both the demands and the profits of the mission may be non-deterministic and hard to predict precisely. Moreover, throughout its execution, each mission may pass through multiple states, with the transitions between them affected by different conditions. Even if the current state of a mission is identified, the subsequent state cannot be confirmed until a transition actually occurs, because multiple potential transitions are possible, each leading to different consequences. In systems with multiple missions, each subject to such uncertainties, a more complicated circumstance arises: the resource allocation strategy must be adapted dynamically based on both the present status and the potential evolution of all missions. In our research, we consider several levels of uncertainty that may be faced when allocating limited resources in such dynamic environments, where the notion of a mission that requires resources maps naturally onto tasks in many network applications.
Our algorithms compute resource allocation solutions for the corresponding scenarios and aim to achieve high profit as well as other performance improvements (e.g., resource utilization rate and mission preemption rate). Given a fixed set of missions, we treat both demands and profits as random variables whose values follow certain distributions and may change over time. Since the profit is not constant, rather than maximizing a specific profit value, our objective is to select the optimal set of missions so as to maximize a given percentile of their combined profit, while constraining the probability of violating the resource capacity within an acceptable threshold. Note that, in this scenario, the selection of missions is final and does not change after the decision has been made; this static solution therefore only fits applications with long-running missions. For scenarios with both long-term and short-term missions, to increase the total achieved profit, we propose a dynamic strategy that, instead of selecting a fixed mission set, adaptively tunes mission selection as the environment changes. We take a surveillance application as an example, in which missions target specific sets of events, and both the demands and the profits of a mission depend on which event actually occurs. To some extent, resources should be focused on high-valued events with a high probability of occurring; on the other hand, resources should also be distributed to gain an understanding of the overall condition of the environment. We develop the Self-Adaptive Resource Allocation (SARA) algorithm, which models mission execution as a Markov process whose states are determined by the combination of occurring events. In this case, resources must be allocated before the events actually occur; otherwise, the mission will miss the event for lack of support.
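The percentile-objective selection described above can be sketched with a small Monte Carlo search over mission subsets. All mission parameters, the capacity, the target percentile, and the violation threshold below are illustrative assumptions, not values from the dissertation:

```python
import random

# Toy sketch: score each candidate mission set by the 10th percentile of its
# combined (random) profit, subject to a capacity-violation probability limit.
random.seed(0)

# Each mission: (mean_demand, demand_spread, mean_profit, profit_spread) -- hypothetical
missions = [(3.0, 1.0, 10.0, 3.0), (5.0, 2.0, 14.0, 4.0), (2.0, 0.5, 6.0, 1.0)]
CAPACITY = 9.0          # total resource capacity (assumed)
VIOLATION_LIMIT = 0.1   # acceptable probability of exceeding capacity (assumed)
N_SAMPLES = 10_000

def score(selection, q=0.1):
    """Return (q-th percentile of combined profit, capacity-violation rate)."""
    profits, violations = [], 0
    for _ in range(N_SAMPLES):
        demand = profit = 0.0
        for i in selection:
            md, sd, mp, sp = missions[i]
            demand += random.gauss(md, sd)   # sample uncertain demand
            profit += random.gauss(mp, sp)   # sample uncertain profit
        if demand > CAPACITY:
            violations += 1
        profits.append(profit)
    profits.sort()
    return profits[int(q * N_SAMPLES)], violations / N_SAMPLES

# Enumerate all non-empty subsets; keep the best chance-feasible one.
best = None
for mask in range(1, 2 ** len(missions)):
    sel = [i for i in range(len(missions)) if mask >> i & 1]
    pct, viol = score(sel)
    if viol <= VIOLATION_LIMIT and (best is None or pct > best[0]):
        best = (pct, sel)

print(best)
```

Brute-force enumeration is only viable for tiny instances; it serves here to make the objective (a profit percentile) and the chance constraint (violation rate below a threshold) concrete.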
Therefore, a prediction of which events are about to occur is necessary, and when the prediction fails, in exchange for the loss of profit, the mistakenly allocated resources collect information that assists future predictions. When the transitions between mission states can be controlled by taking certain maneuvers at the proper time, the probability that missions transition to lower-profit states may be decreased, and a loss of profit may sometimes be avoided. We model this problem as a Semi-Markov Decision Process and propose the Action-Driven Operation Model with Evaluation of Risk and Executability (ADOM-ERE) to calculate optimal maneuvers. One challenge is that state transitions can be affected not only by states and actions, but also by external risks and by competition for resources. On one hand, external risks (e.g., a DoS attack) may change the existing transition probabilities between states; on the other hand, taking actions to avoid lower-profit states may require specially constrained resources, so lower-profit missions sometimes cannot choose their optimal actions because of resource exhaustion. ADOM-ERE jointly considers states, actions, risks, and competition when searching for the optimal allocation solution, and it applies whether the resources for actions are managed centrally or in a distributed manner. Numerical simulations are performed for all algorithms, and the results are compared with several competing approaches to show that our solutions achieve higher profit in the corresponding settings.
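The maneuver-selection idea can be illustrated with value iteration on a toy MDP abstraction of mission states. The state names, rewards, action costs, and the simple risk adjustment below are all hypothetical; this is a sketch of the decision structure (states, actions, risk-shifted transitions), not an implementation of ADOM-ERE:

```python
# Toy mission-state MDP: "maneuver" consumes constrained resources but steers
# the mission away from lower-profit states; external risk shifts probability
# toward "failed" under the passive action.
STATES = ["high", "degraded", "failed"]
ACTIONS = ["wait", "maneuver"]

# P[action][state] -> {next_state: probability} (illustrative numbers)
P = {
    "wait":     {"high": {"high": 0.6, "degraded": 0.4},
                 "degraded": {"degraded": 0.5, "failed": 0.5},
                 "failed": {"failed": 1.0}},
    "maneuver": {"high": {"high": 0.9, "degraded": 0.1},
                 "degraded": {"high": 0.4, "degraded": 0.5, "failed": 0.1},
                 "failed": {"failed": 1.0}},
}
REWARD = {"high": 10.0, "degraded": 4.0, "failed": 0.0}   # per-state profit
COST = {"wait": 0.0, "maneuver": 2.0}                     # resource cost of acting
GAMMA = 0.9

def plan(risk=0.0):
    """Value iteration; `risk` moves extra probability toward 'failed' under
    the passive action -- a toy stand-in for external threats like a DoS attack."""
    trans = {a: {s: dict(d) for s, d in P[a].items()} for a in ACTIONS}
    for s in ("high", "degraded"):
        d = trans["wait"][s]
        shift = min(risk, 1.0 - d.get("failed", 0.0))
        if shift > 0:
            for ns in d:                 # renormalize existing mass...
                d[ns] *= (1.0 - shift)
            d["failed"] = d.get("failed", 0.0) + shift   # ...and shift it to failure

    V = {s: 0.0 for s in STATES}
    for _ in range(200):                 # iterate the Bellman optimality update
        V = {s: max(REWARD[s] - COST[a]
                    + GAMMA * sum(p * V[ns] for ns, p in trans[a][s].items())
                    for a in ACTIONS)
             for s in STATES}
    policy = {s: max(ACTIONS,
                     key=lambda a: REWARD[s] - COST[a]
                     + GAMMA * sum(p * V[ns] for ns, p in trans[a][s].items()))
              for s in STATES}
    return policy, V

policy, V = plan(risk=0.2)
print(policy)
```

With the assumed numbers, paying the maneuver cost in the "degraded" state is worthwhile because it opens a path back to the high-profit state, while no action helps once "failed" is reached; raising `risk` makes proactive maneuvers more attractive, mirroring how external threats change the optimal strategy.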