Adversarial Attacks and Representation Learning for Graph-structured Data

Open Access
- Author:
- Sun, Yiwei
- Graduate Program:
- Computer Science and Engineering
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- February 12, 2021
- Committee Members:
- Vasant Gajanan Honavar, Dissertation Advisor/Co-Advisor
Suhang Wang, Dissertation Advisor/Co-Advisor
Sencun Zhu, Committee Member
Kamesh Madduri, Committee Member
Soundar Rajan Tirupatikumara, Outside Member
Chitaranjan Das, Program Head/Chair
Vasant Gajanan Honavar, Committee Chair/Co-Chair
- Keywords:
- Adversarial Attacks
Graph Mining
Graph Representation Learning
- Abstract:
- Graph-structured data are ubiquitous across many domains such as e-commerce, social networks, professional networks, and finance. In many applications, the structure of the graph is explicit; in others, the underlying graph structure is implicit, i.e., it needs to be inferred from data. Graphs can be homogeneous, i.e., consisting of a single type of node and a single type of link, or heterogeneous, i.e., consisting of multiple types of nodes and links. Multi-view graphs are a special class of heterogeneous graphs consisting of a single type of node and multiple types of links. Graph neural networks (GNNs), which leverage modern deep learning methods to exploit graph topological properties as well as node and link attributes, offer a powerful approach to graph mining problems. However, recent studies show that GNNs are vulnerable to attacks aimed at reducing their performance on graph-structured data. Existing studies of adversarial attacks on GNNs focus primarily on manipulating the connectivity between existing nodes, a task that requires greater effort on the part of the attacker in real-world applications. Moreover, existing studies of adversarial attacks on GNNs are limited to manipulating the topological structure of homogeneous graphs and are inapplicable to GNN models built from heterogeneous graphs or implicit graph-structured data. This dissertation focuses on understanding adversarial attacks on GNNs. Specifically, it aims to answer four inter-related research questions: (i) How can one attack homogeneous graph-structured data to reduce the performance of a GNN trained on such data? (ii) How can one learn compact, information-preserving representations of heterogeneous, specifically multi-view, graph-structured data? (iii) How can one attack heterogeneous graph-structured data to reduce the performance of GNNs trained on such data? (iv) How can one learn compact, information-preserving representations from implicit graph-structured data? We propose four novel GNN models and algorithms for adversarial attacks on, and representation learning from, graph-structured data.