Something isn't making sense here. I'm having a hard time following the logic, and an even harder time figuring out what conclusion we could reach following any hoped-for results.
Your original argument was that 4% was too conservative and additional data would perhaps demonstrate this (even though you previously said "I don't know if there are cycles to human behavior over 130 yrs of time").
SGeeeeeeeeeeeeee pointed out that since 4% was an actual outcome, any additional data could not increase the worst case result, only decrease it.
You then said we shouldn't look at the absolute worst case, only the ~1st to 5th percentile results.
By looking at anything less than 100% success with the historical data, we're already eliminating the true outliers. Even if we set the success rate to 70%, we're still under a 5% SWR, so those "outliers" would need to be quite prevalent in the data.
Now you're saying that your goal is to avoid potential outliers in the historical data -- outliers that somehow show up in more than 5% of the historical results, yet would be washed away by fabricating Monte Carlo data from those same historical data. In your words,
The theory offered is that there are core cyclical characteristics of markets and that the 130 years of actual data we have has those characteristics within it, but that relying on only 130 samples presents us with the risk that they were "taken" at peaks or troughs of cycles and thus might be extreme.
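As I understand it, that proposal amounts to something like a block bootstrap: resample multi-year chunks of the historical return series so that any cyclical structure inside a chunk survives, then backtest withdrawals against the synthetic histories. A minimal sketch, with fabricated return numbers standing in for the real 130-year series:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for 130 years of real annual returns (fabricated).
returns = rng.normal(0.07, 0.18, 130)

def block_bootstrap(returns, n_years, block=10, rng=rng):
    """Build one synthetic history by splicing together randomly chosen
    `block`-year chunks, preserving any within-block cycles."""
    out = []
    while len(out) < n_years:
        start = rng.integers(0, len(returns) - block)
        out.extend(returns[start:start + block])
    return np.array(out[:n_years])

def survives(returns, rate, start=1.0):
    """Does a portfolio survive fixed real withdrawals of `rate`
    (as a fraction of the starting balance) for the whole series?"""
    balance = start
    for r in returns:
        balance = balance * (1 + r) - rate * start
        if balance <= 0:
            return False
    return True

trials = [survives(block_bootstrap(returns, 40), 0.04) for _ in range(1000)]
print(f"Monte Carlo success rate at 4%: {np.mean(trials):.1%}")
```

Note what the splicing does: any cycle longer than the block length is chopped up, which is exactly the mechanism by which historical "outliers" would get washed away.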
If the idea is that the timing of the yearly withdrawals is creating an abnormally low result, one can look at taking the withdrawals in February, or March, or April, etc., based on the historical data. Intercst did so. At the 95% safe level, results varied by a tenth of a percent or so (i.e., 3.95% to 4.05% withdrawals), but there was no hint that taking the 130 samples in January each year created any artifact. In the 30-year runs, averages across the year matched the January sampling within 0.02% at the 95% level, 0.1% at the 99% level, and 0.03% at the 100% level.
Obviously there is nothing in particular to be gained if the results don't change. Yet another study confirming well-established findings is good for a college exercise, but I'm not sure it advances anything else. So we have to be hoping for a significantly higher SWR. You mentioned a 6% SWR over 40 years, which seems like a good "target" for something useful, well beyond the noise level.
But a 6% withdrawal matches up to a 40% success/60% failure rate -- so 60% of the historical examples would have to be outliers...
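The arithmetic behind that 60% figure is just the inverse of the percentile lookup: count the fraction of historical periods whose maximum safe rate meets or exceeds the proposed withdrawal. Again with hypothetical stand-in numbers, not real backtest output:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical maximum safe 40-year withdrawal rates (in %) for 100
# historical starting periods -- illustrative only.
max_safe_rate = np.clip(rng.normal(5.6, 1.2, 100), 3.5, 10.0)

def success_rate(rates, withdrawal):
    """Fraction of historical periods in which `withdrawal` (in %)
    would have lasted the full horizon."""
    return np.mean(rates >= withdrawal)

print(f"6% survives {success_rate(max_safe_rate, 6.0):.0%} of periods")
```

So for a 6% withdrawal to be genuinely "safe," every one of the periods it fails in would have to be dismissed as an outlier -- which is the crux of the objection.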
Let's imagine we did exactly what you suggest, and we obtain a significantly higher result. We are so happy with our outcome that we publish it in the academic/professional journals. The abstract might say...
...
The researchers created a massive set of sample data intended to be "market-like" for the purposes of a Monte Carlo analysis. The techniques for creating this data set were selected to mimic market behavior with respect to any cyclic patterns. (No basis for these patterns is postulated.) Using this data set, the researchers examined the results of periodic withdrawals from a portfolio over an extended time. Contrary to the conventional wisdom and published results (see bibliography), the researchers found that 6% of a starting portfolio could be safely withdrawn for 40 years (p < 0.05). While backtesting shows that this withdrawal rate would have failed in approximately 60% of the actual historical periods for which data are available, the researchers believe those failures were not representative of actual market behavior, due to sampling bias or other unidentified causes, and therefore should be ignored by anyone attempting to determine the portfolio balance necessary to last the rest of their life.