Saturday, May 26, 2018

The Dangers of False Knowledge

In a remarkable interview, Professors Marcos Lopez de Prado and David H. Bailey explain that much of what we think we know about financial markets is just plain false.  When we hunt through enough patterns in markets and finally hit upon one that "works", the odds are great that what we've discovered is a false positive:  the one finding in 20 that happens to test as "significant" at the conventional 5% threshold.  What many traders don't recognize is that traditional in-sample/out-of-sample procedures for determining whether a finding is "overfit" or legit are inadequate.  When computing power allows us to test thousands upon thousands of combinations of multiple variables, it is not difficult to find one permutation that tests "significantly" out of sample as well as in sample.
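To see how easily this happens, consider a quick simulation (my own toy sketch in Python, not the professors' code, with all sizes invented for illustration):  generate thousands of strategies that are pure noise, and a handful will pass both in-sample and out-of-sample significance tests on luck alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_strategies = 5_000   # candidate rules the researcher tries
n_days = 500           # 250 in-sample days + 250 out-of-sample days

# Zero-mean daily returns: every "strategy" is pure noise with no real edge.
returns = rng.normal(loc=0.0, scale=0.01, size=(n_strategies, n_days))
in_sample, out_sample = returns[:, :250], returns[:, 250:]

def looks_significant(x, alpha=0.05):
    """One-sided t-test per strategy: is the mean return 'significantly' above zero?"""
    t, p = stats.ttest_1samp(x, 0.0, axis=1)
    return (t > 0) & (p / 2 < alpha)

passes_is = looks_significant(in_sample)
passes_both = passes_is & looks_significant(out_sample)

print(f"Pass in-sample:             {passes_is.sum()}")
print(f"Pass in- AND out-of-sample: {passes_both.sum()}")
# Expect roughly 5% (~250) to pass in-sample and ~5% of those (~12) to also
# "validate" out of sample: a dozen tradable-looking edges from pure noise.
```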

What many discretionary traders don't recognize is that they are equally subject to the biases of false knowledge.  Traders will hunt for dozens and dozens of "setups" at various time frames and across many different stocks and trading instruments.  They finally find something that "works" and decide that they have figured out their "edge".  When future results fail to live up to expectations, these traders become frustrated and hire trading coaches who tell them to stick to their discipline.  

You just can't make this sh*t up.

A daytrader who places many hundreds of trades in a year will, by pure chance, have runs of five or more consecutive winning or losing trades.  Interpreting these runs as meaningful, the discretionary trader will increase or decrease risk-taking solely on the basis of randomness.  Imagine the trader who trades every day of the week and is ecstatic after achieving five consecutive days of profitability.  He increases his risk-taking based upon his trading "progress" and quickly loses everything he had made.  After five consecutive days of losses, he pulls back his risk-taking, only to watch the next trades win at his reduced size.
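Don't take my word for it.  A few lines of simulation (all parameters invented for illustration) show just how ordinary these streaks are for a trader with zero edge:

```python
import numpy as np

rng = np.random.default_rng(0)

def longest_run(outcomes):
    """Length of the longest streak of identical consecutive outcomes."""
    best = run = 1
    for prev, cur in zip(outcomes, outcomes[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

n_traders, n_trades = 2_000, 400   # coin-flip "traders", 400 trades each
streaks = np.array([longest_run(rng.integers(0, 2, n_trades))
                    for _ in range(n_traders)])

print(f"Median longest streak over {n_trades} trades: {np.median(streaks):.0f}")
print(f"Traders with at least one 5+ streak: {(streaks >= 5).mean():.1%}")
# Typical result: the median longest streak is about 8, and essentially
# every no-edge trader experiences a run of five or more.
```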

All with no objective edge whatsoever.  

What a waste of life.

So what is the answer to this problem of false knowledge from large data samples?  I encourage you to check out the interview with Professors Lopez de Prado and Bailey.  Links to articles explaining how to compute the odds of backtest overfitting are included; a simplified version of that calculation appears below.  Also, Professor Lopez de Prado points to a more basic flaw in what both systematic and discretionary traders are doing:  using the same tools to generate their ideas and to test them.  He emphasizes:

Backtesting should not take place before a theory has been formulated.

In other words, before we can determine whether or not we have an edge (in systematic or discretionary trading), we need to establish knowledge.  A theory explains how and why something occurs.  Once we have formulated a plausible hypothesis, historical data can support limited, targeted tests of whether that theory holds up in practice.  There is no theoretical or practical reason why most strategies drawn from technical analysis, fundamental analysis, or random combinations of quantitative variables should be valid.
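The overfitting calculation mentioned above can be sketched in a few lines.  In their work on the deflated Sharpe ratio, Bailey and Lopez de Prado show that the best Sharpe ratio among N no-skill trials grows predictably with N.  The snippet below uses the standard approximation for the expected maximum of N Gaussian draws; treat it as an illustration of the idea, not as the professors' own code:

```python
import numpy as np
from scipy.stats import norm

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def expected_max_sharpe(n_trials, sr_std=1.0):
    """Approximate expected MAXIMUM Sharpe ratio across n_trials strategies
    that all have zero true skill; sr_std is the cross-trial standard
    deviation of the estimated Sharpe ratios."""
    return sr_std * ((1 - EULER_GAMMA) * norm.ppf(1 - 1 / n_trials)
                     + EULER_GAMMA * norm.ppf(1 - 1 / (n_trials * np.e)))

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} trials -> expected best Sharpe ~ {expected_max_sharpe(n):.2f}")
# The best backtest among 10,000 noise strategies "earns" a Sharpe near 4.
# A reported Sharpe must clear this rising hurdle before it means anything.
```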

This is a perspective you won't find in most writings on trading psychology.  It isn't good for business to tell people they likely have no edge and are not engaged in processes that can objectively capture edges in markets.  Now more than ever, however, the tools are available to help us truly determine whether what we're seeing is random or meaningful.  For traders and investors interested in understanding what they're doing--not simply gambling on market moves--this is a most exciting and promising development.
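What does a theory-first test actually look like?  Here is a bare-bones sketch.  The hypothesis and the data are placeholders I've invented purely for illustration; the discipline is the point:  one stated hypothesis, one targeted test, no hunting.

```python
import numpy as np
from scipy import stats

# Hypothesis, stated BEFORE touching the test data: quarter-end institutional
# rebalancing produces a positive mean return on the last trading day of each
# quarter. (The hypothesis and the data below are placeholders.)
rng = np.random.default_rng(7)
quarter_end_returns = rng.normal(0.0008, 0.01, 80)  # stand-in for ~20 years of quarter-ends

t_stat, p_two_sided = stats.ttest_1samp(quarter_end_returns, 0.0)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2

print(f"t = {t_stat:.2f}, one-sided p = {p_one_sided:.3f}")
# One pre-registered hypothesis, one test: because nothing was mined from the
# data beforehand, the p-value keeps its stated meaning.
```

Note what is reserved here:  the data are used only to test the idea, never to generate it--exactly the separation Lopez de Prado calls for.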
