QUANTITATIVE TRADING IN THE SET MARKET WITH FinRL

Authors

  • Mahatsawat SARAPHANT
  • Somporn PUNPOCHA
  • Bumroong PUANGKIRD

Abstract

The objective of this study is to develop and evaluate a quantitative trading strategy for the Stock Exchange of Thailand (SET) by applying Deep Reinforcement Learning (DRL) techniques through the FinRL library to build automated trading agents for Thai equities. The study covers a sample of 10 stocks listed in the SET index, using historical data from 2010 to 2024. Analytical variables include the Turbulence Index and technical indicators, which allow the agents to adapt their strategies in response to market volatility. Performance is evaluated over the period from 2022 to 2024 using metrics such as Cumulative Return, Sharpe Ratio, Annualized Volatility, and Maximum Drawdown. The experiment compares three DRL models (A2C, PPO, and DDPG) against the Mean-Variance Optimization (MVO) method and the SET index. Findings reveal that all three DRL models significantly outperform both the MVO portfolio and the SET index: the PPO model delivers the highest cumulative return at 23.2714%, followed by A2C at 22.1183% and DDPG at 21.0826%. In contrast, the MVO approach yields a maximum return of 9.1662%, while the SET index records a negative return of -16.0943%.
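The four evaluation metrics named in the abstract can be computed from a series of daily portfolio returns. The sketch below is an illustration only, not the paper's code; the function name, parameter names, and the assumption of 252 trading days per year and a zero risk-free rate are all choices made here for the example.

```python
import numpy as np

def performance_metrics(daily_returns, trading_days=252, risk_free_rate=0.0):
    """Illustrative computation of the study's four metrics from daily returns.

    Assumptions (not from the paper): 252 trading days per year,
    risk-free rate expressed as an annual figure (default 0).
    """
    r = np.asarray(daily_returns, dtype=float)

    # Cumulative return: total growth of the portfolio over the window
    cumulative_return = np.prod(1.0 + r) - 1.0

    # Annualized volatility: daily standard deviation scaled by sqrt(252)
    annualized_volatility = r.std(ddof=1) * np.sqrt(trading_days)

    # Annualized Sharpe ratio: mean excess daily return over daily volatility
    excess = r - risk_free_rate / trading_days
    sharpe_ratio = np.sqrt(trading_days) * excess.mean() / r.std(ddof=1)

    # Maximum drawdown: worst peak-to-trough decline of the equity curve
    equity = np.cumprod(1.0 + r)
    running_peak = np.maximum.accumulate(equity)
    max_drawdown = ((equity - running_peak) / running_peak).min()

    return {
        "cumulative_return": cumulative_return,
        "sharpe_ratio": sharpe_ratio,
        "annualized_volatility": annualized_volatility,
        "max_drawdown": max_drawdown,
    }
```

On such definitions, a cumulative return of 23.2714% over 2022 to 2024 corresponds to the product of all daily growth factors minus one, and the SET index's -16.0943% is computed the same way.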

Published

2025-07-07