
Writing a framework for running trading strategies is certainly an interesting idea. I too am dissatisfied with most commercial platforms due to their lack of features and flexibility. Unfortunately, there seems to be a lack of open source code in this sector. Eclipse Trader looked kind of interesting but the project appears to be dormant now. So expanding on this project could fill that gap.

However, from experience developing and testing algorithmic trading systems I can tell you that your strategy probably has some issues in its current form. I haven't looked into the code, but from your description it appears you (correct me if I'm wrong):

1. Pick a stock

2. Use PSO to figure out the parameters

3. If profitable, run the strategy on the stock with the optimised parameters
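For concreteness, step 2 could look something like the toy PSO loop below. This is a generic sketch, not the poster's actual code: the `fitness` callback stands in for whatever backtest produces the profit figure, and the inertia/acceleration coefficients are common textbook defaults.

```python
import random

def pso_optimize(fitness, bounds, n_particles=20, n_iters=50, seed=42):
    """Minimal particle swarm optimisation: maximise `fitness` over the
    box given by `bounds` (a list of (lo, hi) pairs, one per parameter).
    In a trading context `fitness` would be a backtest returning profit."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    w, c1, c2 = 0.7, 1.4, 1.4  # inertia and acceleration coefficients
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # move, clamped to the parameter bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = fitness(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For example, `pso_optimize(backtest_profit, [(1, 50), (5, 200)])` would search over a hypothetical pair of moving-average lengths. The point of the rest of this comment is that the optimum it finds is fitted to the past, so it needs out-of-sample validation before you trust it.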

This means you're making a well-known error in the system development community: curve-fitting parameters to historical data. This will look very good in the simulations, but there is a high probability that it will break down when trading forward with real money, because it is optimised for the past. This is why there are a couple of widely accepted best practices when it comes to developing and testing trading systems.

First of all, your system should not have or need too many parameters. As a rule of thumb, a robust system shouldn't have more than a handful of parameters and it should ideally show profits in simulations without a great deal of optimisation on those. When optimising, make sure that the optimised parameter values are robust. This means that changing a value by a small increment only changes the resulting performance of your system by a small margin (somewhat analogous to numerical stability). If the performance changes by a big margin, then those values aren't robust and should be discarded.

Furthermore, don't run optimisation on all of your historical data. Instead, optimise on a portion of that data (the 'in-sample' data) and then test the optimised parameter values on the more recent data you didn't optimise on (the 'out-of-sample' data) and see if the performance of your system stays the same or breaks down. Another popular approach is 'walk forward optimisation' [1], which takes the above one step further by repeatedly optimising and forward-testing on your historical data to find robust parameter values.
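The walk-forward idea is mostly bookkeeping over index ranges. A minimal sketch, assuming bar-indexed historical data (the window sizes are arbitrary; you'd plug your own optimiser and backtest into each pair):

```python
def walk_forward_windows(n_bars, in_len, out_len):
    """Yield (in_sample, out_of_sample) index ranges stepping forward
    through the history: optimise parameters on each in-sample chunk,
    then test them on the out-of-sample chunk immediately after it."""
    start = 0
    while start + in_len + out_len <= n_bars:
        yield (range(start, start + in_len),
               range(start + in_len, start + in_len + out_len))
        start += out_len  # slide forward by one out-of-sample step
```

With, say, 1000 daily bars, `walk_forward_windows(1000, 500, 100)` gives five optimise/test pairs; if out-of-sample performance collapses relative to in-sample across those windows, the parameters were curve-fit.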

Some other things to consider: you need to factor in transaction costs, spread and slippage (the difference between the price you enter the order at and the price at which you get the fill). Transaction costs are easy to determine. Spread and slippage only apply when using market orders and can be reduced by trading with limit orders, if your system isn't negatively affected by this. Trading with market orders in a fast-moving market may incur significant slippage, and there are predatory HF algos out there making money from screwing you on your execution. To get a better sense of this, it is considered a best practice to run your simulations on a lower timeframe than the one your system is supposed to work on, in order to eliminate inaccuracies in the results. Ideally, you run simulations against unfiltered tick-by-tick data and additionally use bid and ask data series to factor in the spread. This may, however, be overkill and not needed for a system that runs on a daily timeframe, but it may make all the difference for a faster system.
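As a rough illustration of folding these costs into a backtest fill, here is a sketch using a flat per-share slippage and per-trade commission. That cost model is a simplification I'm assuming for the example; real slippage depends on liquidity, order size and speed:

```python
def net_fill_price(side, quoted_price, spread, slippage_per_share, is_market_order):
    """Adjust a simulated fill for costs a naive backtest ignores.
    Market buys cross the spread (fill near the ask) and may slip up;
    market sells fill near the bid and may slip down. Limit orders are
    assumed to fill at the quoted price, with no spread or slippage."""
    half_spread = spread / 2.0 if is_market_order else 0.0
    slip = slippage_per_share if is_market_order else 0.0
    if side == "buy":
        return quoted_price + half_spread + slip
    return quoted_price - half_spread - slip

def net_trade_pnl(entry_fill, exit_fill, shares, commission_per_trade):
    """Round-trip profit/loss after commissions on both legs."""
    return (exit_fill - entry_fill) * shares - 2 * commission_per_trade
```

For example, a market buy at a quoted mid of 100.00 with a 0.02 spread and 0.01 assumed slippage fills at 100.02; run the same trade through `net_trade_pnl` and a marginally profitable simulated system can easily turn out to be a loser after costs.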

[1] http://en.wikipedia.org/wiki/Walk_forward_optimization



This man speaks the truth.

Particularly running the optimise & test routine on chunks of past data to test for robustness & over-fitting.


This is the first time I've ever felt like I understood something about algorithmic trading systems. Thank you.


Thank you very much for taking the time to write that out - it was incredibly informative!



