Optimal FIRECalc Portfolio

NW-Bound

In an earlier post on another thread (see this), I described a portfolio on FIRECalc that beats the standard simple 2-component portfolio, in that the better portfolio has a higher minimum value during a 30-year retirement period for the same WR. I also described how this is the same "minimax" problem that is often encountered in engineering.

Just now, I thought about this some more and played a bit more with FIRECalc. After about half an hour, I came up with a portfolio far superior to the earlier one. See the following FIRECalc screen capture.



Basically, it's a 50/50 portfolio, where the equity half is divided between small cap value and large cap value. The fixed income half is divided equally between corporate bonds, long treasuries, and short-term cash. It's as simple as that.

Here's what FIRECalc says for a 4% WR over 30-year periods.

FIRECalc looked at the 49 possible 30 year periods in the available data, starting with a portfolio of $1,000,000 and spending your specified amounts each year thereafter. Here is how your portfolio would have fared in each of the 49 cycles. The lowest and highest portfolio balance throughout your retirement was $623,158 to $7,680,224, with an average of $2,685,554. (Note: values are in terms of the dollars as of the beginning of the retirement period for each cycle.)

The $623K min value for 4% WR is far superior to the one I got earlier, which was $400K for 3.3% WR. The superior historical return of value stocks is confirmed by this FIRECalc run.

Note that I changed the default 0.18% expense ratio to 0.10% to reflect today's competitive values.

Ideally, the optimization would be done by software, but I do not have access to the FIRECalc engine to wrap an optimization algorithm around it. Hence, I played with it by hand, but I believe the presented solution is fairly close to the true optimum.

So, go ahead and tweak the mixture of this portfolio by adding a bit here or subtracting a little there to see if you can improve much on it. Then, post your better result here.

I really think this is something I can implement in real life!
 
If I recall correctly, it was Bob Clyatt (author of Work Less, Live More) who had an elaborate asset allocation model that claimed to beat FIRECalc by close to a percentage point of withdrawal rate.

He used many asset classes and re-allocated regularly.
 
I think there's a danger here of overfitting to the historical data -- e.g., fitting a 9th-degree polynomial to 10 data points.

One test might be to divide your data into two parts and use one for the optimization and the other for evaluation. I'd be very surprised if the "optimal" portfolio on the first half was anywhere close to the "optimal" on the second half of the data.
 
Wouldn't this be much the same as Mean Variance Optimization (MVO)?
Bernstein on MVO.
Bernstein: Roll Your Own: Become your Own Portfolio Analyst

I am not a student of portfolio analysis, so I have only a superficial knowledge of this technique. Pioneered by Markowitz, it first coalesces the time histories of the different financial components into a set of statistical measures: their returns, their variances, and their correlations. The components are then treated as random variables, which are blended into a portfolio. The optimal mixture is computed by minimizing the variance for a given expected return.
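
For those who want to see the mechanics, here is a minimal Python sketch of that minimization, with made-up return and volatility numbers purely for illustration (the asset names and figures are my own placeholders, not Markowitz's or FIRECalc's data):

Code:
# Minimal mean-variance sketch: minimize portfolio variance for a target
# expected return.  The asset names, returns, and volatilities below are
# illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize

assets = ["LargeValue", "SmallValue", "CorpBonds", "LongTreasury", "Cash"]
mu  = np.array([0.07, 0.08, 0.04, 0.035, 0.01])      # assumed expected real returns
cov = np.diag([0.18, 0.22, 0.08, 0.10, 0.01]) ** 2   # assumed volatilities, no cross-correlation

target_return = 0.05

def variance(w):
    return w @ cov @ w                               # portfolio variance

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},          # fully invested
    {"type": "eq", "fun": lambda w: w @ mu - target_return}, # hit the target return
]
bounds = [(0.0, 1.0)] * len(assets)                          # no shorting

w0 = np.full(len(assets), 1.0 / len(assets))
res = minimize(variance, w0, bounds=bounds, constraints=constraints)
print(dict(zip(assets, np.round(res.x, 3))))

The two equality constraints encode "fully invested" and "hit the target return"; the rest is just the quadratic variance objective.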

I do not know how often prices are sampled to compute these statistics. It seems to me that besides the daily fluctuations, which are like white noise, there is a longer-term and much larger variation due to the business cycle. There are just not that many such cycles in US financial history to build a good statistical model from.

The use of something like FIRECalc is a more direct application of historical data. The simulation does not care what the variance of each component is, or how they correlate; it just applies the portfolio composition and sees what the historical result would have been. An optimization program would tweak each component a bit sequentially, and observe the effect to know whether the direction of the tweak is the right one or should be reversed. Steepest Descent and the Simplex Method are two of the most popular algorithms for something like this.
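
To make that concrete, here is a rough Python sketch of what wrapping a derivative-free optimizer around such a simulation could look like. Everything below is my own illustration: the return matrix is random placeholder data standing in for the historical series, and the objective is simply the worst minimum balance over all rolling 30-year cycles.

Code:
# Sketch of wrapping a derivative-free optimizer (the Nelder-Mead "downhill
# simplex") around a FIRECalc-style historical simulation.  'returns' would be
# a (years x asset classes) array of annual real returns; random numbers stand
# in for it here as a placeholder only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
returns = rng.normal(0.05, 0.15, size=(140, 5))      # placeholder, NOT real history

def worst_min_balance(weights, data, wr=0.04, horizon=30, start=1_000_000):
    """Lowest balance seen in any rolling 'horizon'-year retirement cycle."""
    w = np.clip(weights, 0, None)
    if w.sum() == 0:                                 # degenerate trial point: reject
        return -np.inf
    w = w / w.sum()                                  # normalize to a valid allocation
    spend = wr * start
    worst = np.inf
    for first in range(data.shape[0] - horizon + 1):
        bal = start
        for yr in range(first, first + horizon):
            bal = (bal - spend) * (1 + data[yr] @ w) # withdraw, then grow
            worst = min(worst, bal)
    return worst

# Maximize the worst-case minimum balance by minimizing its negative.
res = minimize(lambda w: -worst_min_balance(w, returns),
               x0=np.full(5, 0.2), method="Nelder-Mead")
best = np.clip(res.x, 0, None)
best /= best.sum()
print(np.round(best, 3), round(worst_min_balance(best, returns)))

Nelder-Mead (a downhill simplex method) is used here because the min-over-cycles objective is not smooth in the weights, so a derivative-free search is the natural fit.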

So, perhaps optimization using a simulation like FIRECalc and Mean Variance Optimization would give the same result, but I doubt it, because MVO is a more indirect method (it fits a limited set of historical data to a theoretical Gaussian model). And once you get a portfolio from MVO, I think you would want to test it on a FIRECalc-like simulation to see if it works.

And of course, both methods assume that the future will be like the past!

I think there's a danger here of overfitting to the historical data -- e.g., fitting a 9th-degree polynomial to 10 data points.

One test might be to divide your data into two parts and use one for the optimization and the other for evaluation. I'd be very surprised if the "optimal" portfolio on the first half was anywhere close to the "optimal" on the second half of the data.

It is not quite overfitting data. After all, we are reducing several hundred data points (prices of various components over more than 100 years) down to only 5 percentages to build a portfolio. It's really the MVO that is overfitting data, as it seeks to build a complex model described by many coefficients.

Regarding the question that an "optimal portfolio" obtained from 1900-1950 data may not work that well for the period of 1950-2000, that's a reasonable question!

However, as the portfolio is fitted over the entire history, we are guaranteed that the result as shown (meaning a 4% WR not draining the portfolio below about 60% of its starting value) applies to that entire history. So, it will also hold for any subset of it. The more serious question is how it will work for the period of 2013-2063.
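
For what it's worth, the split-sample test suggested above would be straightforward to run once a simulator exists. A sketch, reusing the hypothetical worst_min_balance() helper and the placeholder returns matrix from the earlier sketch:

Code:
# Split-sample check: optimize the weights on the first half of the history,
# then see how the same weights fare on the second half.
# (Assumes 'returns' and worst_min_balance() from the earlier sketch are defined.)
import numpy as np
from scipy.optimize import minimize

half = returns.shape[0] // 2
fit = minimize(lambda w: -worst_min_balance(w, returns[:half]),
               x0=np.full(5, 0.2), method="Nelder-Mead")
w_fit = np.clip(fit.x, 0, None)
w_fit /= w_fit.sum()
print("in-sample worst balance :", round(worst_min_balance(w_fit, returns[:half])))
print("out-of-sample worst bal.:", round(worst_min_balance(w_fit, returns[half:])))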
 
We are all "curve fitting" when we say stocks work in the long run, and whenever someone says "stay the course". ;)

MVO is reserved to describe the method by Markowitz. Regarding the MVO method overfitting data and its sensitivity to data perturbations, see the following by Bernstein: http://www.efficientfrontier.com/ef/497/mvo.htm.
 
Why are you using only 49 periods?
 
It was not me but FIRECalc. :)

The data set starts from 1927, and there are only 49 periods of 30-year length, as it said.

PS. I forgot that FIRECalc's simple model has data since 1871, but the value component data started in 1927.
 
It is not quite overfitting data. After all, we are reducing several hundred data points (prices of various components over more than 100 years) down to only 5 percentages to build a portfolio.

I don't know for sure, but I don't have a good feeling about this. There are 8 parameters to be fit (the percentages of the various asset classes) and 49 data points (assuming we treat them as independent).

Another issue, I think, is that 3 of the 8 asset classes (micro-cap, small value, large value) are there because of the Fama-French studies, which basically used the exact same data to help define them. So the asset classes are not independent of the data used for optimization; there is some circular reasoning going on here.
 
Yes, value stocks worked in the past, and FIRECalc's run simply confirms it. Is it going to work in the future, meaning will value stocks continue to beat growth stocks?

It might, if the intrinsic reason is due to human nature; I think people will continue to chase IPOs and bid up companies going after unproven markets. Only a few succeed, and everybody knows about them, while the rest are conveniently forgotten. Buffett made his fortune while claiming he knows nothing about tech stocks and would not touch them.
 
The US part of my own portfolio is value-tilted, with a bias towards about 50/50 LV/(MV+SV). So I kind of like your stock choices, NW-Bound. Unfortunately, there are no international choices in FIRECalc; oh well.

The LT bonds feel like a risky bet now with rates so low. LT corporates could perhaps be very risky in a severe downturn like the 1930s, when companies went bankrupt regularly. It's too bad that 5-year Treasuries are not one of the options, since they are available with the other spending model choices.

I ran the simulation and got similar but slightly different results.
Code:
Parameters were:
40000 start spending
1000000 portfolio 
30 years
constant spending power model
mixed portfolio -- same numbers as NW-Bound
Results were shown as:
FIRECalc looked at the 49 possible 30 year periods in the available data, starting with a portfolio of $1,000,000 and spending your specified amounts each year thereafter.
Here is how your portfolio would have fared in each of the 49 cycles. The lowest and highest portfolio balance throughout your retirement was $563,881 to $7,455,797, with an average of $2,589,088. (Note: values are in terms of the dollars as of the beginning of the retirement period for each cycle.)
The portfolio low was somewhat lower than the previous results, but not too far off. I also got a spreadsheet output because I don't trust FIRECalc to report that number accurately (bug?). The actual low in the spreadsheet was $417,194 and occurred in 1932 for the retire-in-1929 sequence.
 
The difference between my $623K and your $563K is due to my reducing the investment expense ratio from the FIRECalc default of 0.18% to 0.10%, as I noted earlier. I had already checked several ETFs from Vanguard as well as Schwab and saw that there were several with ERs of 0.10% down to 0.08% and below. These little differences add up.

And I just thought of something else. As we tend to make the run with a 30-year duration, we miss the worst effects of the recent market crashes of 2003 and 2009. The latest year FIRECalc will start a run is 1982, in order to end it in 2012. This allows our retiree to enjoy the market boom from 1982 until 2000 and build up a big surplus to weather the above crashes. It would not show what a recent retiree actually experienced.

So, I shortened the run to 5, 10, and 15 years to capture the recent lousy market returns. Yep, the minimum value drops from $623K to $515K.

PS. Yes, it would be nice if FIRECalc had foreign stocks and bonds. However, I suspect that their historical returns would be lousy due to WWII, and would not be of help. Going forward is another story, of course. This brings us back to the peril of extrapolating the past in order to predict the future.
 
NW-Bound, that is a good point about the 30 year sequence masking the lousy 2000's.

Did you check the spreadsheet results? Several of my runs have shown that the minimum portfolio value in the FIRECalc summary is reported incorrectly -- at least for runs where the minimum value is in the 1929 sequence. I'd like someone to confirm that observation. Seems a significant bug if true.
 
I just remembered that ERD50 observed and reported this before. When using the "Investigate" option, FIRECalc does report the intra-run minimum value. But when making a normal run, FIRECalc reports the minimum of the terminal values, not of the intra-run values.

So, some of the numbers I reported in earlier posts are not compatible!
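
To make the distinction concrete, here is a small NumPy illustration of the two numbers, using mock balance paths rather than FIRECalc's actual internals:

Code:
# Illustration of the two different "minimums" (mock data, not FIRECalc's code).
# 'balances' holds one row per retirement start year and one column per year of
# the retirement, with the year-end portfolio balance in each cell (withdrawals
# are ignored here; the point is only the two definitions).
import numpy as np

rng = np.random.default_rng(1)
balances = 1_000_000 * np.cumprod(1 + rng.normal(0.02, 0.12, size=(49, 30)), axis=1)

terminal_min = balances[:, -1].min()  # min of the 49 ending balances (normal-run report)
intrarun_min = balances.min()         # lowest balance anywhere in any cycle ("Investigate")

print(f"min terminal balance : {terminal_min:,.0f}")
print(f"min intra-run balance: {intrarun_min:,.0f}")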
 
So, I went back and used the "Investigate" option to see what WR a retiree can draw and still have a certain minimum portfolio value. I also shortened the run to 10 years to capture early market downturn effects for a hapless retiree.

For $500K minimum portfolio value (starting with $1M), a 60% total market + 40% commercial paper portfolio resulted in a WR of 2.24%. The "commercial paper" choice in FIRECalc had the best result, compared to "Long Interest Rate", "30 yr Treasury" and "5 yr Treasury".

The "optimal portfolio" using value stocks resulted in a WR of 2.56%. It's just a tad better!

If we are willing to draw down to as low as $400K, then the WR numbers are 3.65% and 4.36% respectively.

For a minimum of $450K, the WR numbers are 3.06% and 3.46% respectively.
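
Incidentally, once a simulator is available, the "highest WR that keeps the low point above $X" question can be answered with a simple bisection. A sketch, again reusing the hypothetical worst_min_balance() helper and placeholder data from the earlier sketch (the weight vector below just mirrors the 50/50 value-tilt mix, with an assumed column order):

Code:
# Bisection for the highest WR whose worst-case minimum balance stays above a
# floor.  Reuses the hypothetical worst_min_balance() helper and placeholder
# 'returns' data from the earlier sketch; illustrative only, not the actual
# "Investigate" feature.
import numpy as np

def max_wr_for_floor(weights, data, floor, horizon=30, lo=0.0, hi=0.10, iters=40):
    for _ in range(iters):
        mid = (lo + hi) / 2
        if worst_min_balance(weights, data, wr=mid, horizon=horizon) >= floor:
            lo = mid                 # floor holds: try spending more
        else:
            hi = mid                 # floor breached: spend less
    return lo

# Assumed column order: SV, LV, corporate bonds, long Treasuries, cash.
weights = np.array([0.25, 0.25, 1/6, 1/6, 1/6])   # the 50/50 value-tilt mix
print(round(max_wr_for_floor(weights, returns, floor=500_000, horizon=10), 4))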
 
I think FIRECalc is a great tool and I appreciate all the threads people have worked on at this site. It has helped me a lot in planning. For my own rebalancing methodology and specific investment approach, I had written a simulator in Excel; FIRECalc helped a bit in checking my math. And recently I went back to it.

Anyway, with that personal tool I feel that FIRECalc is somewhat conservative. I now think that I can survive a 4% WR if history does not get any worse than it has been since the 1920s. But then I'm 65, so it's time to take a little risk ... inching out cautiously. :)
 
Indeed, it appears that if one has a reasonably balanced portfolio, a 4%WR will see you to the end although there might be some scary years in between. They don't call them business cycles for nothing.

As for me, I will stick with 3.5% WR or less for a while. When I claim SS, if the market holds up till then, I will be flush! Will I then care for a German luxury car that I do not care for now?

I still have not taken a ride or a drive of the Audi S4 that my son recently bought. It surely looked and felt nice. Six-speed manual stick and 333 HP! I feel too old for that.
 
I use several methods to calculate expected returns and SWRs. FireCalc, ORP, Schwab's retirement calculators and Bernstein's real expected return numbers of: FI=0%; LC=4%; and SC/EM/REITs=5%.

I also remember that from 1930-2010 a 60/40 port delivered a real annual return of exactly 5%.

Going forward I believe a 60/40 port will have a long term real expected return of 3-4%.
 
So, I went back and used the "Investigate" option to see what WR a retiree can draw and still have a certain minimum portfolio value. I also shortened the run to 10 years to capture early market downturn effects for a hapless retiree...

I have an update to make. Basically, FIRECalc has data for the more complex value-tilt portfolio components up to 1/2005 only.

When I set the run length to 5 years and selected the mixed portfolio option, then downloaded the XLS file, I saw that there were 74 records of 5 years each, starting every year from 1927 to 2000. The last run, starting 1/2000, ends at 1/2005.

When I did the same with the simple portfolio option, I saw a total of 138 records, starting each year from 1871 to 2008. The last run ends at Jan 2013, so this data set is up to date.

Just so you know, the WR numbers I reported earlier did not include the 2008-2009 Great Recession effects as I had thought.
 
We are all "curve fitting" when we say stocks work in the long run, and whenever someone says "stay the course". ;)

MVO is reserved to describe the method by Markowitz. Regarding the MVO method overfitting data and its sensitivity to data perturbations, see the following by Bernstein: http://www.efficientfrontier.com/ef/497/mvo.htm.

Has anyone read the article above by Bernstein? The article is entitled "The Thinking Man Ouija Board", and has this as its conclusion.

Financial analysts and investors have been conned by MVO's complexity and elegance. It's [sic] failure is reminscent [sic] of communism's. Marx's system fails because of the flaws inherent in human nature: Markowitz' system fails because of the flaws inherent in economic forecasting.

Bernstein sounds either skeptical or disillusioned with the method that promised the "efficient frontier" in investing. I wonder what happened.
 
Bernstein sounds either skeptical or disillusioned with the method that promised the "efficient frontier" in investing. I wonder what happened.
He started to talk this way after the 2007-2008 crash. I agree with some other posters here: he saw his clients get frightened and sell, and recognized that any plan that disregarded this behavior was not realistic, and a plan that is not realistic is not optimum (regardless of what the backtesting shows). So, he started to talk about annuities and the need for absolute safety for enough of the portfolio to pay for minimum essential spending. That may not allow a person to actually retire at all, and may not be required based on what the US market has produced historically, but it makes the life of an investment-advice writer or FA much less stressful ("the frightened calls keep coming!") if markets take a downturn. As a result, I don't think, at this point, that his interests are exactly congruent with mine.

This is one area where we (in general) have an advantage over an FA or someone running a pension fund, etc. We are building an individual plan just for us. If we "know ourselves" we can build a better plan (for our needs) than a "one-size-fits-all" approach--the plan doesn't need to "protect us" from behavior that we are not prone to. No advisor can ask a few questions about risk tolerance and really understand what a client will do when the market goes south. Conversely, only by building the plan can "the client" fully appreciate the assumptions that underpin it. That's valuable knowledge when hard times hit.
 
Samclem, that is certainly the reason Bernstein now advocates putting the principal for essential expenses into something safe, and only investing the leftover in the market. Such a portfolio is far from the efficient frontier.

However, back to Markowitz's MVO method: the problem, in my opinion, is that it tries to build an elaborate formal Gaussian model from skimpy back data, then runs an optimizer on this delicate model, which is highly susceptible to spurious data points. In the end, you would want to take an MVO-derived portfolio back to a historical simulation to see how it would work. On the other hand, an optimizer based directly on a historical simulation is its own back-tester. It is a lot simpler to understand than the elaborate math behind MVO, which may be equivalent to curve-fitting a 12th-degree polynomial to 10 data points.

Of course, anytime you base a future prediction on past data, you are betting that history will at least rhyme, if it does not exactly repeat, to borrow from Mark Twain.

And by the way, I used a lot of engineering optimization theory and methods in my career, but I dealt with inanimate objects that generate very true-to-theory Gaussian noise. I could also collect many hours of data from sensors with sampling rates of hundreds to thousands of samples per second to build error models. That's millions of samples, and then we also needed to collect data at different temperatures, sensor orientations, etc. Sometimes there were still unknown variations, called turn-on-to-turn-on random errors, whose cause we could not identify.

It's a far cry from the 142 years of stock market data, which constantly evolves with historical and political events.
 