Amen! As a modeler (of something totally different), I think that too much of a burden is often put on FireCalc - more than any model can bear.
"All models are wrong; some models are useful" (G.E.P. Box, 1979).
W2R, I've used simulation models in my line of work too, and that quote was also the first thing we were taught in modeling class. However, I just don't think the quote is relevant to FireCalc. FireCalc isn't simulating anything; it is just reporting historical data. When you create a simulation model (generally to interpolate or extrapolate data that you cannot easily obtain), it is best to assume that your model is wrong, that it does not account for every possible interaction, etc. But unless there are actual errors in the data entry, or in the calculations in FireCalc, I think we can 'trust' it to report history accurately.
It's like the intro to FireCalc says - the analogy is to reporting past temperature data for a region. It's just a report, not a simulation of the conditions that led to those temperatures.
FIRECalc: A different kind of retirement calculator
Now, when we take that historic range of outcomes and try to apply it to our future - at that point, I guess you can say we are running a simulation. But FireCalc isn't doing it - we are. FireCalc is done at that point; it has provided a historic baseline for us to use in our planning, as we see fit.
.... IMO, there is a bit of religious zeal going on re: Firecalc
See above - for me, FireCalc is what it is. And it is just one data point to use in the FIRE decision. It can't tell me how long I will live, whether I'll suffer some cataclysmic financial event, whether my expenses will outpace historic inflation rates, whether the future will be better/worse than history, etc, etc, etc. So it cannot give me an 'answer'.
That is one of the reasons I use a 100% success rate.
I can't think of a single good reason to expect the future to have a rosier distribution of outcomes than the past. As a general rule, as you collect more data (of any kind), it is reasonable to expect new extremes to be added to the data set - right?
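Just to illustrate that "more data brings new extremes" point with a toy sketch (the return distribution here is purely hypothetical, not FireCalc data) - the worst value seen so far can only hold steady or get worse as the sample grows:

```python
import random

random.seed(42)

# Hypothetical annual "returns" drawn from one fixed distribution.
draws = [random.gauss(0.07, 0.15) for _ in range(1000)]

# Track the worst value observed so far as the sample grows.
worst_so_far = []
worst = float("inf")
for r in draws:
    worst = min(worst, r)
    worst_so_far.append(worst)

print(f"worst after 100 draws:  {worst_so_far[99]:.3f}")
print(f"worst after 1000 draws: {worst_so_far[-1]:.3f}")
```

More years of market history give more chances to record a new "worst case" - which is why I wouldn't bet on the historical worst being the actual worst.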
Plan for the worst, hope for the best?
Again, if those deep drawdown outcomes were outliers, it might be a bit easier for me to write them off as low-probability events. But the distribution of those deep drawdowns looks pretty even, so there is a pretty good chance (historically) that we would have experienced a 'bad case' drawdown that is not much better than the 'worst case'.
A 4% SWR is hyper conservative unless you retire right in front of a major market downdraft ... If you could somehow "know" you have ten years for retirement before a major market downturn, your portfolio would grow so high that you could take the big hit and not run out of money.
2B, I know you provided more context for this later, but I wanted to comment on the statement that if we make it past the first few (10?) years without a problem, we will be in good shape - I'm not so sure (though one might need to look at the squiggly lines or the spreadsheet data in more detail to check this). Take this scenario:
Default 4% withdrawal rate: assume a $40K spend and a $1M portfolio for easy math. Now, let's just say that after 10 years your buying power has increased by 25% - you are doing well. But if you plug those numbers (same $40K spend, but now a $1.25M portfolio) in for the remaining 20 years, you could see a dip down to $480K - still a loss of more than half of your original portfolio.
Now maybe those 'bad case' years only follow big bubbles, and you would be up by more than 25%? I don't know, that would take some more digging in the data. But if it's true, that tells me I may not want to raise my SWR just because my portfolio is growing - I may still have a bad patch ahead of me.
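For what it's worth, the mechanics of that scenario can be sketched in a few lines. The return sequence below is made up for illustration - FireCalc uses actual historical sequences - but it shows how a portfolio can grow for ten years and still get cut in half later when the real withdrawal stays fixed:

```python
def run_portfolio(start_balance, real_withdrawal, real_returns):
    """Apply a fixed real withdrawal, then each year's real return."""
    balance = start_balance
    path = []
    for r in real_returns:
        balance = (balance - real_withdrawal) * (1 + r)
        path.append(balance)
    return path

# Hypothetical sequence: ten good years, a deep three-year drawdown,
# then modest real returns that don't outpace the fixed withdrawal.
returns = [0.06] * 10 + [-0.20, -0.15, -0.10] + [0.04] * 17

path = run_portfolio(1_000_000, 40_000, returns)
print(f"balance after 10 years: ${path[9]:,.0f}")
print(f"lowest balance:         ${min(path):,.0f}")
```

Note that in the back half, 4% real growth on a shrunken balance is less than the $40K withdrawal, so the balance keeps sliding even after the market 'recovers' - the sequence of returns matters, not just the average.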
samclam made good points on Monte Carlo vs. FireCalc historic analysis, IMO. I suspect that market cycles are not totally random, and that the historic cycles are probably a better guide to the range of outcomes we can expect. Inflation, market returns, fixed returns, etc. are related in some ways - they are not totally independent variables - and I think the MC approach treats them as if they were.
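A toy illustration of that independence point (the 'inflation'/'return' link below is fabricated for the sketch, not real data): drawing each variable independently is roughly like shuffling one historical series against the other, which destroys whatever relationship the history actually had:

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation, computed by hand to stay self-contained."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical linked series: "returns" partly driven by "inflation".
inflation = [random.gauss(0.03, 0.02) for _ in range(500)]
returns = [0.05 + 1.5 * i + random.gauss(0, 0.01) for i in inflation]

# Independent draws break the pairing, like shuffling one series.
shuffled = returns[:]
random.shuffle(shuffled)

print(f"paired series:     corr = {corr(inflation, returns):.2f}")
print(f"independent draws: corr = {corr(inflation, shuffled):.2f}")
```

If real market history has linkages like this, a Monte Carlo that samples each variable independently can generate year-by-year combinations that history never produced.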
-ERD50