Vehicle Localization Using Low-Accuracy GPS, IMU and Map-Aided Vision

Open Access
Gupta, Vishisht Vijay
Graduate Program: Mechanical Engineering
Degree: Doctor of Philosophy
Document Type: Dissertation
Date of Defense: March 12, 2009
Committee Members:
  • Sean N Brennan, Dissertation Advisor
  • Sean N Brennan, Committee Chair
  • Henry Joseph Sommer III, Committee Member
  • Alok Sinha, Committee Member
  • Robert T Collins, Committee Member
  • Jack W Langelaan, Committee Member
Keywords:
  • GPS
  • Kalman Filter
  • IMU
  • Vehicle Localization
  • Terrain-Aided Vision
  • Map-Aided Vision
This dissertation describes the development of vehicle state estimation methods using low-cost sensors, their implementation, and their comparison with the highly accurate vehicle state estimators available today. The research was motivated by the problem of navigating a vehicle on a highway, where it is desirable to closely measure the vehicle's state (absolute position and orientation, rotation rates, etc.) to achieve electronic stability control, collision avoidance, driver-alert systems for lane departure, and ultimately autonomous navigation. The focus of this thesis is the development of low-cost methods for vehicle localization, using low-cost commercial off-the-shelf (COTS) sensor systems. A framework is developed to combine measurements from a Global Positioning System (GPS) receiver and an Inertial Measurement Unit (IMU). The performance of a low-cost GPS receiver operating in autonomous mode, integrated with a MEMS-based low-cost IMU, is investigated. The error sources in the GPS and INS systems are characterized in order to choose suitable stochastic models for these errors and to identify the parameters of those models. The vehicle velocity vector is used to improve the yaw angle estimate under conditions of low yaw angle observability.

To obtain an independent, direct measurement of the vehicle orientation, a novel method based on terrain-aided vision is developed. This method matches images captured from an on-vehicle camera against a rendered representation of the surrounding terrain obtained from an on-board map database. United States Geological Survey Digital Elevation Models (DEMs) were used to create a 3D topographic map of the geography surrounding the vehicle. The horizon lines seen in the video captured from the vehicle are compared to the horizon lines obtained from the rendered geography, allowing absolute comparisons between the rendered and actual scenes in roll, pitch, and yaw.
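As a rough illustration of the horizon-matching idea (a sketch, not the thesis implementation): if the DEM-rendered horizon elevation is sampled over a full 360° of azimuth and the camera horizon is resampled onto the same grid, a yaw offset between them can be recovered as the peak of a circular cross-correlation. The function name, the one-degree sampling, and the correlation-based matching are all illustrative assumptions.

```python
import numpy as np

def yaw_from_horizons(rendered, observed):
    """Recover the yaw offset (in whole degrees) between two horizon curves.

    rendered: DEM-rendered horizon elevation per degree of azimuth, shape (360,)
    observed: camera-derived horizon on the same grid, i.e. roughly
              observed[i] == rendered[(i + yaw) % 360]

    A sketch only: a real camera horizon covers a limited field of view and
    needs sub-degree, noise-robust matching of roll and pitch as well.
    """
    r = rendered - rendered.mean()
    o = observed - observed.mean()
    # Circular cross-correlation via FFT; its peak lies at the yaw shift.
    corr = np.fft.ifft(np.fft.fft(r) * np.conj(np.fft.fft(o))).real
    return int(np.argmax(corr)) % 360
```

For example, a synthetic horizon rotated by 37° of yaw is recovered exactly by the correlation peak.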
The work on terrain-aided-vision orientation estimation has been extended to use near-field features such as road signs and road markers. Near-field features allow measurement of the vehicle position in addition to its orientation. A map-aided vision algorithm is presented that registers features in the rendered images with features in the real images using gradient-based minimization of the sum of squared intensity differences. To improve both the convergence properties and the convergence time of the vision algorithm, an IMU is used to predict the location and possible variability of features in the rendered representation, defining a Region of Interest (ROI). A Kalman filter framework is used to fuse the measurements from an IMU with each of the position and orientation estimation methods described above. Numerical simulations are performed in each case to verify the correctness of the formulation. Finally, experiments are performed at the Pennsylvania Transportation Institute (PTI) test-track facility to evaluate the performance of each method against a highly accurate GPS/IMU system.
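The gradient-based sum-of-squared-differences registration can be sketched with a one-dimensional, translation-only Gauss-Newton solver in the Lucas-Kanade style (the thesis registers 2-D rendered and camera images; the single-degree-of-freedom warp and all names here are illustrative assumptions). The initial guess plays the role of the IMU prediction that defines the ROI:

```python
import numpy as np

def register_shift(template, image, s0=0.0, iters=50, tol=1e-6):
    """Estimate the shift s minimizing sum((template[x] - image[x + s])**2)
    by Gauss-Newton, i.e. gradient-based minimization of the SSD cost.

    s0 stands in for the IMU-predicted value: a good prior narrows the
    search to a region of interest and shortens convergence.
    """
    xs = np.arange(image.size, dtype=float)
    s = s0
    for _ in range(iters):
        warped = np.interp(xs + s, xs, image)   # image resampled at x + s
        jac = np.gradient(warped)               # d(warped)/ds per sample
        resid = template - warped
        # Normal-equation solve for the single unknown s.
        step = (jac @ resid) / (jac @ jac)
        s += step
        if abs(step) < tol:
            break
    return s
```

With a smooth synthetic profile shifted by a known sub-sample amount, the solver recovers the shift to well within a tenth of a sample; with a poor initial guess far from the true shift, such gradient-based methods can diverge, which is exactly why an IMU-derived prior helps.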