Originally Posted by Meadbh
I don't know about the programming, but it seems to me that a Monte Carlo simulation of a scenario that, whatever way you look at it, is likely to have a low probability of success, might indeed show such variation on successive iterations.
Are there any math whizzes around, or programmers? Dory?
Statistics geek with a computer science degree here.
Monte Carlo simulations don't give exactly the same result on each run, but as you increase the number of iterations per run, the "variance" between successive runs should, on average, shrink toward zero. Running 10 iterations will generate a lot of variance, 20 somewhat less, 50 less still, and so on. Roughly speaking, the run-to-run spread shrinks like one over the square root of the iteration count, so quadrupling the iterations cuts it about in half.
Once a single run contains a large enough number of iterations, additional iterations do very little to reduce the "delta" from one run to the next, and once that delta is small enough for your statistical purposes, you don't need to run any more.
So if each run is doing sufficient iterations, you should see very little variance from one run to the next. Wildly varying results between runs suggest either too few iterations per run or, for some reason, a different set of input variables being fed in each time.
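To make that concrete, here's a toy sketch (not the actual retirement simulator; the 30% success probability and run sizes are made-up numbers for illustration). Each "iteration" is one simulated outcome, a whole run averages them into a success-rate estimate, and we measure how much that estimate wobbles across many repeated runs:

```python
import random

def run_sim(n_iters, p_success=0.3, seed=None):
    # Toy stand-in for one Monte Carlo run: each iteration is a single
    # simulated outcome that "succeeds" with probability p_success.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_iters) if rng.random() < p_success)
    return hits / n_iters  # this run's estimated success rate

def spread_across_runs(n_iters, n_runs=200):
    # How much the estimate varies from one run to the next:
    # the standard deviation of the estimate over n_runs repeated runs.
    estimates = [run_sim(n_iters, seed=i) for i in range(n_runs)]
    mean = sum(estimates) / n_runs
    var = sum((e - mean) ** 2 for e in estimates) / n_runs
    return var ** 0.5

for n in (10, 50, 250, 1250):
    print(f"{n:>5} iterations per run -> run-to-run spread {spread_across_runs(n):.4f}")
```

The printed spread drops steadily as iterations per run increase, which is the pattern described above: lots of variance at 10 iterations, very little once the runs are large.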