Modeling, Prediction and Control of Engineering Systems with Long Short-Term Memory
Open Access
- Author:
- Fu, Yiwei
- Graduate Program:
- Mechanical Engineering
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- February 10, 2020
- Committee Members:
- Asok Ray, Dissertation Advisor/Co-Advisor
- Asok Ray, Committee Chair/Co-Chair
- Bo Cheng, Committee Member
- Minghui Zhu, Outside Member
- Shashi Phoha, Outside Member
- Thomas Wettergren, Special Member
- Daniel Connell Haworth, Program Head/Chair
- Keywords:
- Artificial Intelligence
- Deep Learning
- Robotics
- Representation Learning
- Autonomous Systems
- Time Series
- Abstract:
- This dissertation focuses on various aspects of modeling, prediction and control of engineering systems with Long Short-Term Memory (LSTM) neural networks. In an effort to build artificial intelligence (AI) into engineering systems, such systems must reason about and understand a complex world. During this process, an engineering system often generates sequential data from its sensors, which can be high-dimensional at each time instant. To process these data, several novel LSTM-based machine learning techniques are proposed in this dissertation. First, an LSTM probabilistic forecasting model for symbol sequences is proposed and applied to a combustion system. Then, an end-to-end imitation learning system that stacks a convolutional neural network (ConvNet) and an LSTM network is developed and implemented on a small autonomous robot, generating real-time control signals for the robot. Next, a new recurrent neural network (RNN) architecture, named DCRNN, is proposed for better modeling of dynamical systems governed by differential equations. Furthermore, another new RNN architecture, named LSTM-LSTM, combines a ConvNet and an LSTM. Using generative adversarial network (GAN) training techniques, the LSTM-LSTM model compresses high-dimensional spatiotemporal data into a compact latent space, from which future predictions are generated. This representation learning module fits into a reinforcement learning (RL) framework and can also incorporate the actions that the agent takes. Extensive experiments and comparisons show superior performance of the proposed model over several state-of-the-art techniques.
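The LSTM cell at the core of the techniques summarized above can be illustrated with a minimal sketch of the standard gated recurrence (input, forget, and output gates plus a candidate cell update). This is a generic textbook LSTM in NumPy with randomly initialized weights, not the dissertation's specific models; the function names and dimensions are illustrative assumptions.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step in the standard formulation.
    W: (4H, H+D) stacked gate weights, b: (4H,) bias,
    where H = hidden size and D = input size (illustrative layout)."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x]) + b
    i = sigmoid(z[0 * H:1 * H])   # input gate: how much new information enters
    f = sigmoid(z[1 * H:2 * H])   # forget gate: how much old memory is kept
    o = sigmoid(z[2 * H:3 * H])   # output gate: how much memory is exposed
    g = np.tanh(z[3 * H:4 * H])   # candidate cell update
    c = f * c_prev + i * g        # cell state carries long-term memory
    h = o * np.tanh(c)            # hidden state is the per-step output
    return h, c

# Run the cell over a short random sequence (weights are illustrative only).
rng = np.random.default_rng(0)
D, H, T = 3, 5, 10                # input size, hidden size, sequence length
W = rng.normal(scale=0.1, size=(4 * H, H + D))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    h, c = lstm_step(rng.normal(size=D), h, c, W, b)
print(h.shape)
```

The additive cell-state update `c = f * c_prev + i * g` is what lets gradients flow across many time steps, which is why LSTMs suit the long sequential sensor streams described in the abstract.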