Published 30th September 2024.

Introduction

This article provides an overview of one year of personal research in the fields of algorithmic trading and quantitative finance. The primary objectives of this research are to find actionable insights and convert them into profit, as well as to develop a trading workflow and tools capable of performing under various market conditions. We first describe the different research pathways followed before defining the new direction for the coming year. As this article contains direct links to the most prominent articles, it can also serve as a reading guide for newcomers who may be interested in my work.

1. Where GPT's value lies in finance

The idea of using ChatGPT to analyze the well-known financial data source FinViz seemed innovative in the summer of 2023, though it may seem less so now. It is worth emphasizing that the hallucination risk associated with LLM outputs (in this case, GPT-3.5) argues for a more quantitative approach: working directly from FinViz data to isolate relevant stocks based on purely quantitative criteria instead of GPT's Markovian analysis. FinViz data has been used for stock clustering, detection of undervalued stocks, and breakout candidate selection.
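A purely quantitative screen of this kind can be sketched as follows. Note that the field names (`ticker`, `pe`, `pb`, `div_yield`) and thresholds are illustrative assumptions, not FinViz's actual export schema:

```python
def screen_undervalued(rows, max_pe=15.0, max_pb=1.5, min_div_yield=2.0):
    """Filter a FinViz-style export for candidate undervalued stocks.

    `rows` is a list of dicts; the field names are assumptions made for
    illustration, not FinViz's real column names.
    """
    picks = []
    for row in rows:
        # Skip rows with missing fundamentals rather than guessing values.
        if row.get("pe") is None or row.get("pb") is None:
            continue
        if (row["pe"] <= max_pe
                and row["pb"] <= max_pb
                and row.get("div_yield", 0.0) >= min_div_yield):
            picks.append(row["ticker"])
    return picks

rows = [
    {"ticker": "AAA", "pe": 9.0, "pb": 1.1, "div_yield": 3.0},
    {"ticker": "BBB", "pe": 40.0, "pb": 8.0, "div_yield": 0.0},
]
print(screen_undervalued(rows))  # ['AAA']
```

The point of hard-coding the criteria is precisely the one made above: every rejected ticker can be traced back to a threshold, with no black box in between.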

At this stage, it appears that using GPT for quantitative decision-making may not be the optimal approach. The added value compared with hand-coded Python is not significant, and the latter avoids the 'black-box' pitfall. However, in one article, GPT has been used as a go/no-go selector on the stock-selection output, in an attempt to remove any emotional bias before placing orders on the market.
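A go/no-go gate of that kind can be sketched as below; the prompt wording is an assumption, and `query_llm` is any callable wrapping your LLM client of choice (a ChatGPT wrapper, a local model, etc.):

```python
def build_go_no_go_prompt(ticker, stats):
    """Assemble a strict prompt asking the model for a binary verdict."""
    facts = "\n".join(f"{key}: {value}" for key, value in stats.items())
    return (
        f"Act as a dispassionate risk officer. Statistics for {ticker}:\n"
        f"{facts}\n"
        "Answer with exactly one word: GO or NO-GO."
    )

def go_no_go(ticker, stats, query_llm):
    """`query_llm` is any callable mapping a prompt string to a reply string."""
    reply = query_llm(build_go_no_go_prompt(ticker, stats))
    # Parse defensively: only an unambiguous leading "GO" clears the trade.
    return reply.strip().upper().split()[0] == "GO"
```

Keeping the LLM behind a plain callable also makes the gate trivial to stub out in backtests, where no model call should happen.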

On the other hand, the language-processing ability of GPTs has been explored in legal reasoning, and I find its usefulness convincing. The versatility of the LangChain framework makes it possible to build interesting tools for learning and coding tasks. The dual-agent bot, notably, underpins a very successful trading strategy documented in the article 'Outperforming the market (1000% in one year)'. I initially coded this strategy in April. I must admit that my subsequent attempts to use the dual-agent coder were disappointing, leading me to prefer a straightforward linear workflow for developing code for QuantConnect. Its LEAN engine provides streamlined interfacing with Interactive Brokers and minimal friction between backtests and live deployment. The turn-of-month effect has been investigated through the TLT ETF, as have the mean-reversion properties of SPY and silver.
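The core of a turn-of-month study reduces to flagging which sessions sit near a month boundary; here is a minimal stdlib-only sketch, where the three-session window is an assumed convention rather than the one used in my TLT study:

```python
from datetime import date

def is_turn_of_month(trading_days, i, window=3):
    """True if trading_days[i] (a sorted list of session dates) is within
    `window` sessions of a month boundary, on either side."""
    d = trading_days[i]
    # Last `window` sessions of the month: a nearby later session changes month.
    for j in range(i + 1, min(i + window + 1, len(trading_days))):
        if trading_days[j].month != d.month:
            return True
    # First `window` sessions of the month: a nearby earlier session differs.
    for j in range(max(i - window, 0), i):
        if trading_days[j].month != d.month:
            return True
    return False
```

Splitting daily returns by this flag, and comparing the mean return inside versus outside the window, is then a one-liner over any price history pulled from QuantConnect or a broker API.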

The following graph gives the full overview of the research performed during the past year:

[Graph: overview of the past year's research]

2. Algorithmic trading versus classical trading

Using QuantConnect and its extensive strategy library (developed by professional quants or competing students) is straightforward. A certain level of Python knowledge is, however, required to adapt these algorithms to your own risk management and constraints. LLM pair-coding, in most cases, only assists with this. Current AI models code poorly, and from my perspective, even the new models that claim reasoning capabilities, such as OpenAI's o1, offer only marginal improvements.

Over the past year, I've found that algorithmic trading offers no significant advantage over traditional trading methods. This observation is specific to my experience: my profits from purely algorithmic strategies are lower than those from fundamental or technical analysis. This situation might change in the future and likely varies from person to person. It could indicate that I need to improve my quantitative algorithm skills or that I'm simply better at technical trading.

Despite the limited scope of this finding, it highlights that private traders should weigh the time required to develop, deploy, and maintain algorithms against the time spent gathering fundamental data, reading financial analyses, or conducting technical studies. Regardless of the outcome, the key benefit of algorithmic trading is its ability to eliminate emotional biases in decision-making. Therefore, even though I am becoming more skeptical about allowing algorithms to execute trades independently (clarifying that I'm not referring to high-frequency trading), I believe that the decision-making process should remain fundamentally quantitative rather than relying on unquantified insights.

3. Future Directions

GitHub Repositories

The first step is to reorganize the most promising code into GitHub repositories, categorized under the following topics:

Strategies for QuantConnect

I have developed several strategies that consistently generate alpha. In my view, QuantConnect remains the most effective and user-friendly platform for discovering market alpha.

Technical Trading

In a recent article, I used ib_async to generate historical datasets and gave an overview of the Laguerre filter for technical analysis.
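John Ehlers' Laguerre filter mentioned above can be sketched in a few lines. The recursion is standard; the damping factor `gamma` (0.8 here) is the usual tuning knob, and the default is arbitrary:

```python
def laguerre_filter(prices, gamma=0.8):
    """Ehlers' four-stage Laguerre filter over a price series.

    Higher gamma -> heavier smoothing and more lag. The four stages are
    seeded with the first price to avoid a long warm-up transient.
    """
    l0 = l1 = l2 = l3 = prices[0]
    out = []
    for p in prices:
        l0p, l1p, l2p, l3p = l0, l1, l2, l3
        l0 = (1 - gamma) * p + gamma * l0p
        l1 = -gamma * l0 + l0p + gamma * l1p
        l2 = -gamma * l1 + l1p + gamma * l2p
        l3 = -gamma * l2 + l2p + gamma * l3p
        # The filter output is a weighted blend of the four stages.
        out.append((l0 + 2 * l1 + 2 * l2 + l3) / 6)
    return out
```

A quick sanity check: a constant series passes through unchanged, while a trending series produces an output that lags the price, which is exactly the smoothing behavior the indicator trades on.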

Convolutional Neural Networks

Currently, my neural network implementation only matches the performance of a buy-and-hold strategy on SPY. Given my long-term perspective on holding and trading physical assets in a debt-heavy and inflationary economy, I plan to explore neural networks for forecasting commodity prices. This work is still in progress and not yet ready for publication.
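For readers unfamiliar with why a CNN suits price series at all: its basic building block is a 1-D convolution sliding a small kernel along the series. This toy sketch is not my network, just that building block, with a hand-picked slope kernel standing in for what a trained layer would learn:

```python
def conv1d(series, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in most deep
    learning libraries): slide the kernel and take dot products."""
    k = len(kernel)
    return [
        sum(series[i + j] * kernel[j] for j in range(k))
        for i in range(len(series) - k + 1)
    ]

# A [-1, 0, 1] kernel responds to local slope; a trained CNN layer learns
# many such kernels from the data instead of having them hand-made.
slopes = conv1d([100.0, 101.0, 103.0, 102.0, 105.0], [-1.0, 0.0, 1.0])
print(slopes)  # [3.0, 1.0, 2.0]
```

Stacking such layers with nonlinearities, then a small dense head, yields the usual sequence-forecasting architecture; the hard part, as noted above, is beating buy-and-hold, not building the network.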

AI agents

I use homemade AI agents almost daily, mainly for learning languages. I plan to adapt one of them for fundamental research and market monitoring.

Conclusion

Reflecting on the past year of AI-powered quantitative trading, it's clear that while algorithmic approaches offer unique advantages — such as reducing emotional bias — their practical benefits may vary depending on individual expertise and strategies. My journey has underscored the importance of balancing algorithm development with traditional trading methods, and the necessity of continuously refining both to adapt to ever-changing market conditions.