SnowballCamper
In another thread a poster wrote a good question, and that thread got me thinking more broadly. I've gathered up some thoughts to share, but here's the initial question:
"DCA allows you to buy more on dips and less on peaks, it makes volatility your friend during accumulation.
Can't the process be run in reverse, selling more on the peaks, selling less on the valleys?"
I think the reverse process would be to sell a fixed number of shares each time period, but this yields more money when the market is up and less money when the market is down. It isn't quite selling more on the peaks. In order to sell more on the peaks, you have to know it is a peak, i.e., you would have to know the future.
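To make the mechanics concrete, here is a minimal sketch of selling a fixed number of shares each period, with made-up prices; the share count stays constant while the dollar proceeds simply track the price.

```python
# Sketch of the "reverse DCA" described above, using hypothetical prices.
prices = [100, 80, 120, 90, 110]   # made-up share prices by period
shares_per_period = 10             # fixed number of shares sold each period

for price in prices:
    proceeds = shares_per_period * price
    print(f"price {price:>3}: sold {shares_per_period} shares for ${proceeds}")

# The share count never changes; only the dollar proceeds do, so you raise
# more cash when prices are high and less when they are low -- which is not
# the same as deliberately selling more at the peaks.
```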
That thread started with some conclusions drawn from a back-test of market data, and many members pointed out the problem with back-testing: a subset of the data is selected, and conclusions drawn from that subset don't necessarily reflect all the data.
Many of us prefer to use Firecalc, and this is much better than back-testing a subset of the data. However, Firecalc is really just a super backtest using all the readily available data over multiple time periods. When we properly interpret the result as a summary of historical data, things are good.
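For anyone curious what that kind of rolling-window backtest looks like mechanically, here is a minimal sketch in Python. It is not Firecalc's actual methodology or data set; the return series is a random placeholder, and the starting balance and spending figures are arbitrary.

```python
import random

# Placeholder "history": random real returns standing in for actual data.
random.seed(0)
annual_real_returns = [random.gauss(0.05, 0.15) for _ in range(120)]

def window_succeeds(returns, start_balance=1_000_000, annual_spend=40_000):
    """True if the portfolio survives every year of this return window."""
    balance = start_balance
    for r in returns:
        balance = (balance - annual_spend) * (1 + r)
        if balance <= 0:
            return False
    return True

horizon = 30  # years per window
windows = [annual_real_returns[i:i + horizon]
           for i in range(len(annual_real_returns) - horizon + 1)]
successes = sum(window_succeeds(w) for w in windows)
print(f"{successes} of {len(windows)} {horizon}-year windows succeeded "
      f"({100 * successes / len(windows):.0f}%)")
```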
The problem is that the result is presented as a single percentage, and percentages are also used to quantify probability. It's just too easy to slide from "90% of the 30-year periods in the data set resulted in retirement success" to the erroneous "I have a 90% chance of a successful retirement." Reading a percentage should always prompt the question "Of what?" and that question should be answered accurately.
Since we do have to plan for an uncertain future, a look at history is certainly prudent, but we should do more. A popular approach is to run a simulation in which one or more market mechanisms are modeled with a probability distribution (a random number generator). This approach has the conceptual advantage of recognizing that past performance is not necessarily indicative of future results. However, a common criticism starts with the question, "Well, has that ever happened before?" And of course the answer is no. But the whole purpose of the simulation is to look beyond what has happened and to consider what could happen in a mathematically rigorous way.
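A bare-bones version of such a simulation might look like the sketch below. The normal distribution and the particular mean, standard deviation, balance, and spending values are assumptions chosen only for illustration.

```python
import random

random.seed(1)

def one_retirement(years=30, start_balance=1_000_000, annual_spend=40_000,
                   mean=0.05, stdev=0.15):
    """Simulate one retirement; True if the money never runs out."""
    balance = start_balance
    for _ in range(years):
        r = random.gauss(mean, stdev)        # draw a real return from the model
        balance = (balance - annual_spend) * (1 + r)
        if balance <= 0:
            return False
    return True

trials = 10_000
successes = sum(one_retirement() for _ in range(trials))
print(f"{successes / trials:.1%} of {trials} simulated retirements succeeded")
```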
Mathematical rigor is difficult, and so criticisms come in sloppy, misleading statements such as, "Since standard deviation applies only to Gaussian distributions, we get breathless reporting of '100-year' market declines."
The real criticism, popularized by Nassim Nicholas Taleb, is that the probability distributions used in the model are unjustified, because time and time again reality produces events that were extraordinarily unlikely under any probability distribution with a finite variance. This is the shotgun blast through the "mathematical rigor," but the difficulty remains. To get a glimpse of the difficulty, find a probability distribution visualizer online, such as https://statdist.ksmzn.com, and compare the Normal (Gaussian) distribution to the Cauchy distribution. You can fiddle with the parameters to make them look almost the same. You'll find that the variance of the Cauchy distribution is described as "undefined," a euphemism for infinitely large. The standard deviation is the square root of the variance and certainly applies to more than just the normal distribution, as the variance is listed for every other distribution in the visualizer.
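If you would rather see the difference in numbers than in a plot, the sketch below samples both distributions. The normal distribution's sample variance settles near its theoretical value, while the Cauchy's is dominated by a handful of enormous draws and never settles down.

```python
import math
import random
import statistics

random.seed(2)

def standard_cauchy():
    # Inverse-CDF draw for a standard Cauchy: tan(pi * (U - 0.5)), U uniform on [0, 1).
    return math.tan(math.pi * (random.random() - 0.5))

n = 100_000
normal_draws = [random.gauss(0, 1) for _ in range(n)]
cauchy_draws = [standard_cauchy() for _ in range(n)]

print("normal: largest |draw| =", round(max(map(abs, normal_draws)), 1),
      " sample variance =", round(statistics.variance(normal_draws), 2))
print("cauchy: largest |draw| =", round(max(map(abs, cauchy_draws)), 1),
      " sample variance =", round(statistics.variance(cauchy_draws), 2))
```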
So what should one do? Thankfully, a member of the forum pinned me down on this some years ago. I think the best way to evaluate a retirement plan from a quantitative perspective is with the following equation:
(Retirement Assets) / (Annual Spending) = Years of funded retirement
But it must come with two questions:
How are you going to grow retirement assets to keep up with inflation?
And how are you going to grow retirement assets to last until you die (presuming you expect to live longer than you have currently funded)?
This approach invites users to examine what they have and what they want to spend, and in most cases it will reveal a stark difference between how many years are funded and how much longer they expect to live. It invites many more questions beginning with "How...", but this is a feature. The user has to think about the answers instead of just wondering whether ninety-whatever percent is good enough.
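A quick worked example with made-up numbers:

```python
# Hypothetical figures purely for illustration.
retirement_assets = 1_200_000
annual_spending = 60_000

years_funded = retirement_assets / annual_spending
print(f"Years of funded retirement: {years_funded:.0f}")   # 20

# If you retire at 60 and plan to age 95, that's 35 years of spending against
# 20 funded years -- the gap the two "How..." questions have to close.
```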
"DCA allows you to buy more on dips and less on peaks, it makes volatility your friend during accumulation.
Can't the process be run in reverse, selling more on the peaks, selling less on the valleys?"
I think the reverse process would be to sell a fixed number of shares each time period, but this yields more money when the market is up and less money when the market is down. It isn't quite selling more on the peaks. In order to sell more on the peaks you have to know it is a peak e.g. you would have to know the future.
That thread started with some conclusions made from a back-test of market data, and many members pointed out the problem with back testing. The problem is that a subset of the data is selected, and conclusions drawn from that subset don't reflect all the data.
Many of us prefer to use Firecalc, and this is much better than back testing from a subset of the data. However, Firecalc is really just a super backtest using all the readily available data over multiple time periods. When we properly interpret the result as a summary of historical data things are good.
The problem is that the result is presented in a single percentage, and percentage is also used to quantify probability. It's just too slippery to slide from "90% of 30 year periods in the data set resulted in retirement success" to the erroneous, "I have a 90% chance of a successful retirement." Reading a percentage should always be followed by the question "Of what?" and answered accurately.
Since we do have to plan for the uncertain future, a look at history is certainly prudent, but we should do more. A popular approach is to do a simulation where one or more market mechanisms is modeled with a probability distribution (random number generator). This approach has the conceptual advantage of recognizing that past performance is not necessarily indicative of future results. However, a common criticism starts with the question, "Well, has that ever happened before?" And of course the answer is no. But the whole purpose of the simulation is to look beyond what has happened and to consider what could happen in a mathematically rigorous way.
Mathematical rigor is difficult, and so criticisms come in sloppy, misleading statements such as, "Since standard deviation applies only to Gaussian distributions, we get breathless reporting of "100 year" market declines."
The real criticism, popularized by Nassim Nicholas Taleb, is that the probability distributions used in the model are unjustified because time and time again reality demonstrates events that were extraordinarily unlikely from any probability distribution with a finite variance. This is the shotgun blast through the "mathematical rigor," but the difficulty remains. To get a glimpse of the difficulty, find a probability distribution visualizer online such as https://statdist.ksmzn.com Then look at the Normal (Gaussian) distribution and compare it to the Cauchy distribution. You can fidget with the parameters to make them look almost the same. You'll find the the variance for the Cauchy distribution is described as "undefined," a euphemism for infinitely large. The standard deviation is the square root of the variance and certainly applies to more than just the normal distribution, as the variance is listed for every other distribution in the visualizer.
So what should one do? Thankfully, a member of the forum pinned me down on this some years ago. I think the best way to evaluate a retirement plan from a quantitive perspective is with the following equation
(Retirement Assets) / (Annual Spending) = Years of funded retirement
But it must come with two questions:
How are you going to grow retirement assets to keep up with inflation?
And How are you going to grow retirement assets to last until you die (presuming you expect to live longer than you have currently funded)?
This approach invites the users to examine what they have, what they want to spend, and in most cases will have a stark difference between how many years are funded, and how much longer they expect to live. It invites many more questions with "How...", but this is a feature. The user has to think about the answers instead of just wondering if ninety whatever percent is good enough.