Yes. It is absolutely theoretically possible to force appropriate correlations. Rather than choosing the annual stock return, bond return, and inflation using three independent random numbers for each year in the sequence, you could (and probably should) use a single random number to drive the movement of all three metrics relative to the previous year's values. This could be done (theoretically) using an approach like the one you describe -- or several others. It is a much harder problem, and I'm not sure that we really have enough real-world data to capture all the relationships, but some of us would find the capability fun to play with.
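To make that concrete, here is a minimal sketch of one standard way to force such correlations: draw independent normals and mix them through a Cholesky factor of an assumed covariance matrix. Every number below (means, standard deviations, correlations) is invented purely for illustration -- estimating the real values from market data is exactly the hard part.

```python
import numpy as np

# All numbers here are invented for illustration; the real correlations
# would have to be estimated from historical data.
corr = np.array([[ 1.0,  0.2, -0.1],
                 [ 0.2,  1.0, -0.3],
                 [-0.1, -0.3,  1.0]])
means  = np.array([0.07, 0.03, 0.03])   # stock, bond, inflation means
stdevs = np.array([0.18, 0.07, 0.04])   # matching standard deviations

cov  = np.outer(stdevs, stdevs) * corr  # covariance matrix
chol = np.linalg.cholesky(cov)          # chol @ chol.T == cov

def correlated_year(rng):
    """One year of (stock, bond, inflation): independent normals
    mixed through the Cholesky factor come out correlated."""
    z = rng.standard_normal(3)
    return means + chol @ z

rng = np.random.default_rng(0)
years = np.array([correlated_year(rng) for _ in range(10000)])
```

The single vector of independent draws per year plays the role of the "single random number" above; the Cholesky factor is what ties the three metrics together.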
This does presume that a large sample existed from which the relationships between these variables were sampled, a correlation coefficient computed, and its statistical significance tested. That hasn't been done with a solid conclusion, as best I know, and given that the Fed has been raising rates for X months now with the market near an all-time high . . . how would you persuade anyone that an experimental model should presume higher rates reduce stock prices in 2006 and onward? They certainly seem to on announcement days, but the last 18 months are powerful counter-evidence.
Now, I do think what you're discussing is interesting, but I also think that sort of thing is really about market prediction and the pursuit of higher returns, and surely people have already done statistically rigorous correlations of stocks against other variables in search of a predictive edge. OTOH, anyone who found such a thing would, of course, be crazy to publish it. Hmmm.
Note that I have not talked about any variable other than the raw equity market's performance. That doesn't mean I shouldn't -- I just haven't. I've talked only about increasing our sample size of intermediate-term human behavior trading stocks.
My suggestion is clearly more simplistic than multivariate correlation in the frequency (or time) domain. The 130+ year sample we have could have been an aberration, with many end-of-year numbers sitting at sinusoidal troughs. The years to come could be far rosier. Or the reverse. The SWR from that single sample might be substantially deceptive in either direction. I just want more samples of "market-like" data, and I'm defining "market-like" as "having an equivalent Nyquist-limited frequency profile." There is probably some value in noting that raddr's "1st order" model does yield a sigma around a mean SWR that suggests a value different from the historical data.
No question at all that the hypothesized definition of "market-like" is unproven. No one has shown that a waveform of equivalent frequency "content" is a good definition of "market-like." I know this is only an experiment, and again, I would not even suggest it if the magnitude of the work looked huge.
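For what it's worth, one concrete reading of "equivalent Nyquist-limited frequency profile" is the standard phase-randomization trick: take the FFT of the historical series, keep each bin's magnitude, scramble the phases, and invert. A sketch -- the `historical` series here is just stand-in noise, not the actual record:

```python
import numpy as np

def surrogate(series, rng):
    """New series with the same frequency-magnitude profile as `series`:
    keep each FFT bin's magnitude, randomize its phase, invert."""
    n = len(series)
    spectrum = np.fft.rfft(series)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spectrum))
    phases[0] = 0.0          # DC term must stay real
    if n % 2 == 0:
        phases[-1] = 0.0     # so must the Nyquist bin for even n
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n)

rng = np.random.default_rng(1)
historical = rng.normal(0.07, 0.18, 130)  # stand-in for 130 yr of returns
sample = surrogate(historical, rng)       # same spectrum, new "history"
```

Each call produces another sample of data that is "market-like" in exactly the frequency-profile sense, which is the kind of larger sample I'm after.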
I don't think it is likely to lead to changes in the way people choose retirement spending habits. It's not likely to change the fact that a 4% SWR is a pretty good first SWAG, or the fact that events outside the realm of simulators keep us from getting more refinement. But I think such a tool in the hands of the right person could lead to a greater understanding of some of the issues and risks in retirement.
I don't know how good 4% is from one sample. I'd like to find the time to study raddr's methodology more carefully to determine how he obtained different numbers. But hey, you're right, no question news events could change human behavior . . . well, that's badly phrased. I guess any correlation we look for presumes human behavior does not change, but an asteroid strike is likely going to trump the variance of the waveform produced by human behavior.
Yeah. But remember that everything about the Monte Carlo simulation -- performance distributions, inflation distributions, correlations, FFT filter variable determination -- is calibrated using only that historical data. If you determine the Monte Carlo variables empirically from the historical data, there is no guarantee that they describe anything causal. If you work hard at it, you can guarantee that your Monte Carlo simulation produces probability results nearly identical to the historical record, but you can't guarantee that it has any more predictive capability than the historical record it is based on.
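As a toy illustration of that calibration point, here is a bare-bones Monte Carlo survival calculation that draws each year's real return i.i.d. from a pool. The pool below is invented, not the actual historical record; the point is only that whatever pool you calibrate from, the simulation can't know more than that pool does.

```python
import numpy as np

def survival_rate(returns_pool, wr=0.04, years=40, trials=2000, seed=2):
    """Fraction of Monte Carlo retirements that survive `years` years
    withdrawing a constant real fraction `wr` of the starting balance,
    with each year's real return drawn i.i.d. from `returns_pool`."""
    rng = np.random.default_rng(seed)
    survived = 0
    for _ in range(trials):
        balance = 1.0
        for _ in range(years):
            balance = (balance - wr) * (1.0 + rng.choice(returns_pool))
            if balance <= 0.0:
                break
        else:
            survived += 1
    return survived / trials

# An invented pool of real returns -- NOT the actual historical record.
pool = np.array([0.12, -0.08, 0.20, 0.05, -0.15,
                 0.30, 0.07, 0.02, -0.03, 0.10])
```

Swap in a different pool (or correlated draws, or filtered noise) and the survival probabilities move with it -- the output is only ever a restatement of the inputs.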
Dead on correct. We are using a historical record that we suspect of aberration as the source of a frequency profile to which we are molding random noise. And news events can undo everything.
But.
We have already done the work using the historical record on the presumption that it has some value. And we have no control over the news. If there is a composite waveform in that data, made up of multiple frequencies, we could find those frequencies, model them, and have more samples of market-like data.
If we were sitting here reading this in the year . . . what, 5000 AD or something, and we had hundreds of 40-year samples and discovered that the SWR for those in total was 6%, would we still worship the 4% number? No, of course not. The equivalent work that produced the "100% certain" 4% number would have yielded a different number. That first 130+ year period would be exposed as an aberration.
Oh well. It does seem curious to me that random numbers yield consistently lower SWRs, and that reasonably non-contrived, low-pass-filtered random numbers yield SWRs different from the historical record. If the frequency content of the historical record were uniformly distributed, I don't think these results would differ. Dunno.
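A quick numerical check of why that shouldn't be too surprising: white noise spreads its power evenly across frequencies, while even a crude low-pass filter (a 5-point moving average here) piles the power up at the low end, so the two series have genuinely different frequency profiles going into any SWR calculation.

```python
import numpy as np

rng = np.random.default_rng(3)
white = rng.standard_normal(4096)

# A 5-point moving average is a crude low-pass filter.
kernel = np.ones(5) / 5.0
smoothed = np.convolve(white, kernel, mode="same")

def band_power(x, lo_frac, hi_frac):
    """Fraction of total power between lo_frac and hi_frac of the
    Nyquist-limited band."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    n = len(spec)
    return spec[int(lo_frac * n):int(hi_frac * n)].sum() / spec.sum()

low_white  = band_power(white, 0.0, 0.25)     # roughly 0.25: power is flat
low_smooth = band_power(smoothed, 0.0, 0.25)  # much larger: power piles up low
```

If the historical record's spectrum really were flat, filtering wouldn't change anything; since the filtered and unfiltered results differ, the spectrum presumably isn't flat.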