Efficient Frontier

dallas27

So I'm exploring efficient frontier math, trying to understand what profound ideas it can teach me.

I've captured the adjusted price history of all the ETFs I use in my portfolio, and from that built all the math needed to run 10,000 simulations of different weights, computing the standard deviation and return of each mix. I scatter plot those 10k points and use the maximum excess-return/risk ratio to plot the tangent line relative to a 1% return on a risk-free portfolio.
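For concreteness, here's a minimal Python sketch of the core of what I built (`prices` is a stand-in name for the DataFrame of daily adjusted closes I captured, and the 252-trading-day annualization is just one convention):

```python
import numpy as np

# `prices`: assumed pandas DataFrame of daily adjusted closes, one column per ETF.
rets = prices.pct_change().dropna()
mu = rets.mean() * 252            # annualized mean return per ETF
cov = rets.cov() * 252            # annualized covariance matrix
rf = 0.01                         # the 1% risk-free return

n_sims = 10_000
rng = np.random.default_rng(0)
w = rng.dirichlet(np.ones(len(mu)), size=n_sims)   # random weights, each row sums to 1

port_ret = w @ mu.to_numpy()
port_sd = np.sqrt(np.einsum("ij,jk,ik->i", w, cov.to_numpy(), w))
sharpe = (port_ret - rf) / port_sd

best = sharpe.argmax()            # tangency portfolio: max excess-return/risk
print("tangency weights:", dict(zip(mu.index, w[best].round(3))))
# The tangent line through it is: return = rf + sharpe[best] * risk
```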

In the end, I have a gorgeous, complex chart with consistent math and whatnot, but I'm realizing this math is backward-looking and really just favors whatever had high returns and low correlation in the past. Other than quantifying it, nothing new.

To those of you with deep financial backgrounds: what value do you get from EF calculations, especially looking forward, and what assumptions do you make in such scenarios?


 
The only value I have gotten is this: when I plug in a portfolio different from mine, that portfolio is NOT on the efficient frontier. Since my portfolio is on the efficient frontier (at least according to the Personal Capital tools), I am happy.

Other than that, I pick an asset allocation that is on the efficient frontier that I am comfortable with and that has an estimated annual return that I am also comfortable with.

One could figure out what return they need going forward, and see on the efficient frontier graph what their asset allocation should be and what risk that would entail. If that risk is too high for them, they will need to rethink that particular return target.
 
...In the end, I have a gorgeous, complex chart with consistent math and whatnot, but I'm realizing this math is backward-looking...

The Efficient Frontier presumes that market returns of various assets are random processes that are stationary. Without this attribute, all bets are off.

See: https://en.wikipedia.org/wiki/Stationary_process.

And then, the method of determining the Efficient Frontier from past market results, called MVO (mean-variance optimization), is fraught with sensitivity to fluctuations in the data; we simply do not have enough historical data.
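As a toy illustration of that sensitivity (every number below is invented), the unconstrained max-Sharpe weights come from solving cov @ w = mu - rf; nudge one asset's expected return by half a percent and watch the weights move:

```python
import numpy as np

# All inputs invented for illustration.
rf = 0.01
mu = np.array([0.07, 0.065, 0.05])   # "expected" returns: 2 similar stock funds + bonds
sd = np.array([0.16, 0.17, 0.08])
corr = np.array([[1.00, 0.85, 0.10],
                 [0.85, 1.00, 0.10],
                 [0.10, 0.10, 1.00]])
cov = np.outer(sd, sd) * corr

def tangency(mu):
    # Unconstrained max-Sharpe direction: solve cov @ w = mu - rf, then normalize.
    w = np.linalg.solve(cov, mu - rf)
    return w / w.sum()

print(tangency(mu).round(3))                      # baseline weights
print(tangency(mu + [0.0, 0.005, 0.0]).round(3))  # fund 2 nudged up 0.5%/yr
```

In this made-up setup, the half-percent nudge shifts the split between the two correlated funds dramatically, which is exactly the fragility being complained about.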

In a past thread, this subject came up, and this is what I contributed back then.

See also: https://en.wikipedia.org/wiki/Harry_Markowitz.

We are all "curve fitting" when we say stocks work in the long run, and whenever someone says "stay the course". ;)

The term MVO is reserved for Markowitz's method. On MVO overfitting the data and its sensitivity to data perturbations, see the following by Bernstein: http://www.efficientfrontier.com/ef/497/mvo.htm.

Has anyone read the article above by Bernstein? It is entitled "The Thinking Man's Ouija Board", and it concludes with this:

Financial analysts and investors have been conned by MVO's complexity and elegance. It's [sic] failure is reminscent [sic] of communism's. Marx's system fails because of the flaws inherent in human nature: Markowitz' system fails because of the flaws inherent in economic forecasting.

Bernstein sounds either skeptical or disillusioned with the method that promised the "efficient frontier" in investing. I wonder what happened.
 
Bernstein showed, for example, that for historical results of international vs. US stocks, the endpoints swap between periods, but the nose of the curve stays in the middle: the blend has a lower standard deviation than either asset alone, and a higher return than the lower of the two (which switch places from time to time). Another lesson in diversification.
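A quick sketch of that "nose" with two assets (illustrative numbers, not Bernstein's actual US/international figures):

```python
import numpy as np

# Two-asset frontier: a blend can be less risky than either asset alone.
mu_us, mu_intl = 0.07, 0.065     # hypothetical annual returns
sd_us, sd_intl = 0.15, 0.17
rho = 0.5                        # imperfect correlation does the work

w = np.linspace(0, 1, 101)       # weight in US
ret = w * mu_us + (1 - w) * mu_intl
var = (w * sd_us)**2 + ((1 - w) * sd_intl)**2 + 2 * w * (1 - w) * rho * sd_us * sd_intl
sd = np.sqrt(var)

i = sd.argmin()
print(f"min-risk blend: {w[i]:.0%} US, sd {sd[i]:.2%} vs {sd_us:.2%} / {sd_intl:.2%} alone")
```

As long as the correlation is low enough relative to the two volatilities, some blend sits to the left of both endpoints on the risk axis.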

Of course, This Time It's Different :LOL::LOL::LOL:
 
Like others, I played around quite a bit with the efficient frontier, honing my Excel skills - I also did the Monte Carlo pick-a-random-AA approach and produced a scatter plot. At least that's what I did before I discovered the Excel optimizer (Solver). Either way (especially with the scatter plot) you'll find that once you go beyond 2 assets, there are a host of AA's that give nearly the same results. But some results just might not make intuitive sense and it might lead to an overly fine-tuned portfolio that's a pain to maintain and rebalance while adding almost no improvement over a more basic portfolio with fewer assets.

Anyway, since Bernstein is being quoted, let's add John Bogle when he asks "which efficient frontier?"

https://books.google.com/books?id=Z...john bogle "which efficient frontier"&f=false

His point is that what lies on the efficient frontier really depends on the timeframe you're looking at. Many of us who do this look at the entire timeframe for which the assets have existed (using Shiller's data, for example), add Fama & French's data if we're tilting small-cap, use Simba's spreadsheet (which only goes back to 1972), or whatever we can get our hands on. And of course, in retirement we're generally more interested in the sequence of returns than in pure CAGR and STDEV.

Today we're in relatively uncharted territory, with extremely low interest rates and several forward-looking predictors showing low returns for stocks for the coming decade or so. Are we pushing out the tail of the existing historical distributions? No way to know for certain until we've lived it.

As for me, I've been using some of the MS allocation indices as a guide. I generated my AA based on them, then ran an efficient frontier to see whether my actual choices for each asset put me at least close to the edge, and then I call it a day until rebalance time.

As noted earlier, we are all dataminers to one degree or another - otherwise we wouldn't choose "stocks for the long run", pick AA's like 60/40, etc. :)
 
Interesting concept. One flaw is that correlations change over time, rapidly in a downturn.

Correct - the very definition of correlation assumes a timeframe. Over the full history of stocks and bonds, the correlation is near 0. But you can also compute, say, a rolling 5-year correlation between the two assets and see some interesting results, as the correlation moves from nearly -1 to nearly +1 over the years.
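That rolling view is a one-liner in pandas; this sketch assumes two monthly return Series, `stock_rets` and `bond_rets` (stand-in names for whatever data you loaded), on a shared index:

```python
# Assumes `stock_rets` and `bond_rets` are pandas Series of monthly returns
# aligned on the same DatetimeIndex.
full_corr = stock_rets.corr(bond_rets)                  # one number, whole history
rolling_corr = stock_rets.rolling(60).corr(bond_rets)   # 60-month (5-year) window

print(f"full-period correlation: {full_corr:.2f}")
print(rolling_corr.min(), rolling_corr.max())           # how far it wanders
```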

The question is: do we invest based on the total timeframe that the assets have existed (very long term), or based on where we currently think we are (shorter term), and which choice gives our portfolios the best shot at lasting through retirement? Man, I wish I had the answers to those questions! :LOL:
 
... once you go beyond 2 assets, there are a host of AA's that give nearly the same results. But some results just might not make intuitive sense and it might lead to an overly fine-tuned portfolio that's a pain to maintain and rebalance while adding almost no improvement over a more basic portfolio with fewer assets...
As the Wikipedia link I provided earlier points out, unless we constrain the percentage of each asset in a portfolio to between 0 and 100%, an optimizer can easily find an optimal solution that leverages one asset while shorting another asset.

This would be mathematically correct, but it is data mining to the extreme, and it will lead to great disappointment if the future deviates even slightly from the past. Just because a computer program spits out a number does not make it right.
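To make that concrete with a toy example (numbers invented): given two highly correlated assets where one has a lower expected return, the unconstrained closed-form max-Sharpe solution shorts the weaker one:

```python
import numpy as np

# Without 0-100% bounds the max-Sharpe solution happily goes short.
rf = 0.01
mu = np.array([0.07, 0.05, 0.05])    # two correlated funds, the second one weaker
sd = np.array([0.16, 0.17, 0.08])
corr = np.array([[1.00, 0.85, 0.10],
                 [0.85, 1.00, 0.10],
                 [0.10, 0.10, 1.00]])
cov = np.outer(sd, sd) * corr

w = np.linalg.solve(cov, mu - rf)    # unconstrained tangency direction
w /= w.sum()
print(w.round(3))                    # the middle weight comes out negative: a short
```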

Today we're in relatively uncharted territory, with extremely low interest rates and several forward-looking predictors showing low returns for stocks for the coming decade or so. Are we pushing out the tail of the existing historical distributions? No way to know for certain until we've lived it.

As for me, I've been using some of the MS allocation indices as a guide. I generated my AA based on them, then ran an efficient frontier to see whether my actual choices for each asset put me at least close to the edge, and then I call it a day until rebalance time.

As noted earlier, we are all dataminers to one degree or another - otherwise we wouldn't choose "stocks for the long run", pick AA's like 60/40, etc. :)

The idea of MPT (Modern Portfolio Theory) remains valid in the qualitative sense. Diversification should reduce fluctuation while having a good chance to improve the long-term return. But if we optimize down to the last 1% of a portfolio mix using past data, we are kidding ourselves.
 
As the Wikipedia link I provided earlier points out, unless we constrain the percentage of each asset in a portfolio to between 0 and 100%, an optimizer can easily find an optimal solution that leverages one asset while shorting another asset.

This would be mathematically correct, but it is data mining to the extreme, and it will lead to great disappointment if the future deviates even slightly from the past. Just because a computer program spits out a number does not make it right.



The idea of MPT (Modern Portfolio Theory) remains valid in the qualitative sense. Diversification should reduce fluctuation while having a good chance to improve the long-term return. But if we optimize down to the last 1% of a portfolio mix using past data, we are kidding ourselves.

Completely agree with all of the above. Also, I'm not a leveraging type, so I've always constrained the weights to between 0% and 100%. Just as an aside, I usually use the "evolutionary" method, which forces you to constrain each variable individually. But the main reason I use it is that it tends not to get trapped in local minima the way the other methods do, and it almost always arrives at a solution, since these problems are so highly nonlinear. Finally, once done, it's always useful to go back and do a sensitivity analysis on the target AA to make sure that the thing didn't optimize on a knife edge.
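For anyone doing this outside Excel, scipy's differential_evolution is a rough analog of Solver's Evolutionary method: you give explicit bounds for every variable, and it is fairly resistant to local minima. A sketch with invented inputs:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Maximize Sharpe with every weight bounded to [0, 1]; inputs are invented.
rf = 0.01
mu = np.array([0.07, 0.065, 0.05])
sd = np.array([0.16, 0.17, 0.08])
corr = np.array([[1.00, 0.85, 0.10],
                 [0.85, 1.00, 0.10],
                 [0.10, 0.10, 1.00]])
cov = np.outer(sd, sd) * corr

def neg_sharpe(w):
    w = w / w.sum()                 # normalize so the weights sum to 1
    ret = w @ mu
    risk = np.sqrt(w @ cov @ w)
    return -(ret - rf) / risk       # minimize the negative Sharpe ratio

res = differential_evolution(neg_sharpe, bounds=[(0, 1)] * 3, seed=0)
w = res.x / res.x.sum()
print(w.round(3), -res.fun)         # constrained weights and achieved Sharpe
```

And as noted above, it's worth re-running neg_sharpe on small perturbations of the result to confirm you're on a plateau rather than a knife edge.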
 
The magic 8-ball might be a more efficient tool :D
 
One big part of conventional EF analysis is that it looks for risk-adjusted gains: you're supposed to leverage the result to reach the expected gains you're targeting. If you don't use leverage, you may be trading away gains for lower volatility at the "optimum" point.

I'm not particularly concerned about risk, so I use more equities to boost expected gain without leverage.
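To make that trade-off concrete (toy numbers, and assuming you could borrow at the risk-free rate): along the tangent line, leverage scales excess return and risk together:

```python
# Along the tangent line: E[R] = rf + L * (mu_tan - rf), risk = L * sd_tan.
rf, mu_tan, sd_tan = 0.01, 0.06, 0.10   # hypothetical tangency portfolio
target = 0.085                          # the return you actually need

L = (target - rf) / (mu_tan - rf)       # leverage needed to hit the target
print(f"leverage {L:.2f}x -> return {rf + L * (mu_tan - rf):.1%}, risk {L * sd_tan:.1%}")
```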
 