Adversarial Attacks Against Reinforcement Learning-Based Portfolio Management Strategy

Yu-Ying Chen, Chiao-Ting Chen, Chuan-Yun Sang, Yao-Chun Yang, Szu-Hao Huang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Many researchers have incorporated deep neural networks (DNNs) with reinforcement learning (RL) in automatic trading systems. However, such methods result in complicated algorithmic trading models with several defects; in particular, a DNN model is vulnerable to malicious adversarial samples. Research has rarely focused on planning long-term attacks against RL-based trading systems. To mount such attacks, an adversary must generate imperceptible perturbations while keeping the number of modified time steps small. In this research, an adversarial agent is used to attack an RL-based trading agent. First, we propose an extension of the ensemble of identical independent evaluators (EIIE) method, called enhanced EIIE, which incorporates information on the best bids and asks. Enhanced EIIE was demonstrated to produce an authoritative trading agent that yields better portfolio performance than an EIIE agent. Enhanced EIIE was then applied to the adversarial agent, enabling it to learn when and how strongly to attack (in the form of introduced perturbations). In our experiments, the proposed adversarial attack mechanisms were more than 30% more effective at reducing accumulated portfolio value than the conventional fast gradient sign method (FGSM) and iterative FGSM, the attack mechanisms most commonly studied and used as baselines.
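The FGSM and iterative-FGSM baselines named in the abstract can be illustrated on a toy model. This is a minimal sketch, not the paper's implementation: it assumes a simple logistic classifier with binary cross-entropy loss, so the input gradient has a closed form. All names, weights, and values below are illustrative.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, epsilon):
    """One-step FGSM on a logistic classifier p = sigmoid(w.x + b).
    Moves x by epsilon in the sign direction of the loss gradient."""
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))
    # Gradient of binary cross-entropy wrt the input x is (p - y) * w
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

def ifgsm_perturb(x, w, b, y, epsilon, alpha, steps):
    """Iterative FGSM: repeated small steps of size alpha,
    clipped back into the epsilon-ball around the original x."""
    x_adv = x.copy()
    for _ in range(steps):
        z = np.dot(w, x_adv) + b
        p = 1.0 / (1.0 + np.exp(-z))
        grad_x = (p - y) * w
        x_adv = x_adv + alpha * np.sign(grad_x)
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
    return x_adv

# Hypothetical 4-feature input (e.g. price features) with true label y = 1
w = np.array([0.5, -1.2, 0.3, 0.8])
b = 0.1
x = np.array([1.0, 0.5, -0.3, 0.2])
y = 1.0

x_fgsm = fgsm_perturb(x, w, b, y, epsilon=0.05)
x_ifgsm = ifgsm_perturb(x, w, b, y, epsilon=0.05, alpha=0.02, steps=5)
```

Both attacks keep each perturbation inside an epsilon-ball around the original input, which is the "imperceptibility" constraint the abstract refers to; the paper's contribution is a learned adversary that additionally chooses *when* to spend such perturbations over a long trading horizon.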

Original language: English
Pages (from-to): 50667-50685
Number of pages: 19
Journal: IEEE Access
Volume: 9
DOIs
State: Published - Mar 2021

Keywords

  • Task analysis
  • Portfolios
  • Perturbation methods
  • Deep learning
  • Training
  • Information management
  • Solid modeling
  • Reinforcement learning
  • adversarial attack
