When Milliseconds Matter: Evaluating the Vulnerability of High Frequency Trading Models to Adversarial Manipulation

Restricted (Penn State Only)
- Author:
- Chakraborty, Karmabir
- Graduate Program:
- Data Analytics
- Degree:
- Master of Science
- Document Type:
- Master's Thesis
- Date of Defense:
- March 27, 2025
- Committee Members:
- Hajime Shimao, Thesis Advisor/Co-Advisor
Chengfei Wang, Committee Member
Raghu Sangwan, Program Head/Chair
Warut Khern-am-nuai, Committee Member
- Keywords:
- High-Frequency Trading (HFT)
Deep Learning
Adversarial Attacks
Algorithmic Trading
Limit Order Book (LOB)
FI-2010 Dataset
Convolutional Neural Networks (CNNs)
Long Short-Term Memory (LSTM)
Fast Gradient Sign Method (FGSM)
Projected Gradient Descent (PGD)
Finance
DeepLOB
- Abstract:
- The financial industry has undergone a profound transformation with the integration of artificial intelligence and machine learning techniques. High-frequency trading, characterized by the execution of large volumes of transactions within microseconds, has particularly benefited from these advances. Deep learning models have emerged as powerful tools for price prediction and trading strategy optimization, able to identify complex patterns in market microstructure data that traditional statistical methods often miss.

Despite their impressive performance, the security implications of deploying these sophisticated models in financial environments with significant monetary stakes remain inadequately explored. As trading systems become increasingly automated and reliant on AI-driven decision-making, they potentially create new attack surfaces for malicious actors. The susceptibility of deep learning models to adversarial examples, carefully crafted perturbations designed to mislead neural networks, raises serious concerns about their reliability in adversarial settings such as financial markets.

This thesis investigates the vulnerability of deep learning models used in high-frequency trading to adversarial attacks. I examined five architectures (two CNNs, two LSTMs, and DeepLOB) on the FI-2010 limit order book dataset. After establishing baseline performance through predictive accuracy and trading strategy implementations, I subjected these models to FGSM and multi-iteration PGD attacks. Results demonstrated that even minor input perturbations significantly compromise prediction accuracy and trading returns across all five models, exposing a critical security vulnerability in automated HFT systems. This research provides empirical evidence of adversarial threats in algorithmic trading and underscores the urgent need for robust model development in financial applications where the stakes are exceptionally high.
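For readers unfamiliar with the attacks named in the abstract, the sketch below illustrates how FGSM and PGD perturbations are typically generated against a differentiable classifier. This is a minimal PyTorch illustration, not the thesis's actual code: the function names, signatures, and the choice of an L-infinity constraint are assumptions made for exposition.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Single-step FGSM: move each input feature by +/- epsilon,
    following the sign of the loss gradient with respect to the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def pgd_attack(model, x, y, epsilon, alpha, steps):
    """Multi-iteration PGD: repeated gradient-sign steps of size alpha,
    each projected back into the L-infinity ball of radius epsilon
    around the original input."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Projection step: keep the perturbation within the epsilon ball
            x_adv = x_orig + (x_adv - x_orig).clamp(-epsilon, epsilon)
        x_adv = x_adv.detach()
    return x_adv
```

FGSM takes a single gradient-sign step of size epsilon, while PGD iterates smaller steps and projects back into the epsilon-ball after each one, which is why multi-iteration PGD is generally the stronger attack. The perturbation budgets and step counts used in the thesis experiments are not reproduced here.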