Utilizing Generative Adversarial Networks To Generate Training Data From Text
Open Access
Author:
Cao, Jeffery
Graduate Program:
Computer Science
Degree:
Master of Science
Document Type:
Master's Thesis
Date of Defense:
November 11, 2020
Committee Members:
Thomas F La Porta, Thesis Advisor/Co-Advisor
Chitaranjan Das, Program Head/Chair
Ting He, Committee Member
Keywords:
Generative Adversarial Network; GAN; text-to-image GAN; zero-shot machine learning; image classification
Abstract:
In recent years, the use of Generative Adversarial Networks (GANs) has surged across many domains. In particular, there has been considerable research interest in using GANs to generate synthetic images. Recently, researchers have taken this a step further and begun generating images from only the textual descriptions of objects. For example, given just a textual description of a flower, such as "A yellow flower with five petals and a pink stamen," a GAN can create a realistic image. Thanks to the increased capacity and effectiveness of modern GANs, realistic 256 x 256 images can now be created from a textual description alone. We take this a step further and examine how this generated data performs as training data for a classifier, and whether it can be used to classify a class the classifier has never seen before.
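The pipeline the abstract describes — generate labeled images from class descriptions, then train a classifier on only that generated data — can be sketched roughly as follows. This is an illustrative outline, not the thesis's implementation: the `generate_images` function and the flower descriptions are stand-ins (a real text-to-image GAN would produce 256 x 256 images from a text embedding, not 8-dimensional feature vectors), and the nearest-centroid classifier stands in for whatever classifier the thesis actually evaluates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class descriptions; in the thesis these would be full
# textual descriptions fed to a text-to-image GAN.
descriptions = [
    "yellow flower, five petals, pink stamen",
    "red flower, round petals",
    "blue flower, thin petals",
]

# Stand-in for GAN output: each description maps to a cluster of feature
# vectors. A real text-to-image GAN would instead emit images conditioned
# on an embedding of the description.
centers = {d: rng.normal(size=8) for d in descriptions}

def generate_images(description, n):
    """Generate n synthetic 'images' (feature vectors) for a description."""
    return centers[description] + 0.1 * rng.normal(size=(n, 8))

# Train a trivial nearest-centroid classifier on generated data only.
train_centroids = {d: generate_images(d, 50).mean(axis=0) for d in descriptions}

def classify(x):
    """Assign x to the class whose training centroid is nearest."""
    return min(train_centroids, key=lambda d: np.linalg.norm(x - train_centroids[d]))

# Evaluate on fresh samples standing in for real test images.
sample = generate_images(descriptions[0], 1)[0]
print(classify(sample))
```

In the zero-shot setting the abstract alludes to, the classifier would additionally be asked to recognize a class for which only a textual description (and hence only GAN-generated training images, no real ones) was ever available.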