Endobronchial Video Analysis and CT-Video Fusion

Open Access
Author:
Byrnes, Patrick Daniel
Graduate Program:
Electrical Engineering
Degree:
Doctor of Philosophy
Document Type:
Dissertation
Date of Defense:
July 19, 2017
Committee Members:
  • William E. Higgins, Dissertation Advisor
  • William E. Higgins, Committee Chair
  • Vishal Monga, Committee Member
  • Kenneth Jenkins, Committee Member
  • Robert Collins, Outside Member
Keywords:
  • lung cancer
  • video summarization and abstraction
  • video analysis
  • keyframe extraction
  • ct-video fusion
  • ct-video registration
  • bronchoscopy
Abstract:
Bronchoscopy enables a number of minimally invasive chest procedures for diseases such as lung cancer and asthma. For example, using the bronchoscope's continuous video stream as a guide, a physician can navigate through the lung airways to examine general airway health, collect tissue samples, or administer a disease treatment. Aside from its intraoperative utility, the recorded video provides high-resolution detail of the airway mucosal surfaces and a record of the bronchoscopic procedure. Unfortunately, little use is made of the acquired video stream, a potentially invaluable source of information. One reason for this is that bronchoscopic examination of the lungs produces a considerable amount of endobronchial video. Because no robust automatic video-analysis methods exist to summarize this immense data source, it is essentially discarded after the procedure. In addition, little effort has been made to link the video to the detailed anatomical information given by a patient's preoperative 3D CT chest scan.

In this thesis, we present methods for constructing a multi-modal CT-video chest model that enables more efficient and practical CT-video fusion for post-operative analysis and additional follow-up procedures in the pulmonary-disease management workflow. In particular, we develop a method for parsing endobronchial video for the purposes of summarization, browsing, and retrieval. We also introduce a semi-automatic registration method that uses the parsed video sequence, along with interactive CT-video alignment of a small number of video frames, to produce spatial-linkage data between the summarized video and an airway-tree model extracted from the associated 3D CT chest scan. Given an appropriate number of manual registrations, this method can register an entire endobronchial video sequence, regardless of its duration. Taken together, the data sources generated by these methods constitute the CT-video chest model.
The resulting chest model serves as the foundation for a suite of video-analysis tools that we incorporate into an existing image-guided intervention (IGI) system known as the Virtual Navigator System (VNS). The integrated tool suite, known as the VNS Video Analysis Subsystem, quickly conveys the summarized contents of a processed video sequence to a user. Moreover, the system enables advanced interactive 3D visualization of fused CT-video data and supports a number of applications relevant to post-operative analysis of endobronchial video. To the best of our knowledge, the VNS Video Analysis Subsystem represents the first comprehensive video-analysis system to synergistically combine CT and endobronchial video. We test our system and methods using real video drawn from lung-cancer patient studies. Our methods consistently select salient representative keyframes appropriately distributed throughout a processed video sequence, enabling quick and accurate post-operative review of the bronchoscopic examination. Additionally, we demonstrate a proof-of-concept multiple-bronchoscopy application using two case studies with associated multi-modal data sets comprising white-light bronchoscopy (WLB) and narrow-band imaging (NBI) video. This novel application represents the first quantitative graphical linkage between multiple modalities within the bronchoscopy workflow.
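To give a flavor of the keyframe-extraction idea the abstract describes, the following is a minimal sketch only, not the dissertation's actual method: it selects a frame as a keyframe whenever its intensity histogram differs sharply from that of the previous keyframe. All names and the threshold value here are illustrative assumptions.

```python
import numpy as np

def extract_keyframes(frames, threshold=0.25):
    """Illustrative keyframe selection by histogram-difference thresholding.

    frames: iterable of 2D uint8 arrays (grayscale video frames).
    A frame becomes a keyframe when the L1 distance between its
    normalized 64-bin histogram and the last keyframe's exceeds
    `threshold` (an assumed, untuned value).
    """
    keyframes = []
    last_hist = None
    for idx, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=64, range=(0, 256))
        hist = hist / hist.sum()
        if last_hist is None or np.abs(hist - last_hist).sum() > threshold:
            keyframes.append(idx)
            last_hist = hist
    return keyframes

# Synthetic "video": 10 dark frames followed by 10 bright frames,
# standing in for a scene change in an endobronchial sequence.
rng = np.random.default_rng(0)
dark = [rng.integers(0, 60, (64, 64), dtype=np.uint8) for _ in range(10)]
bright = [rng.integers(180, 255, (64, 64), dtype=np.uint8) for _ in range(10)]
print(extract_keyframes(dark + bright))  # prints [0, 10]
```

A real endobronchial-video parser would need far more: handling of blurred, dark, and specular frames, shot segmentation, and distribution of keyframes along the airway path, as the abstract's summarization method implies.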