Where are we in the stock market cycle?

I read that article yesterday. It may be the worst financial article I have ever read. I don't have to agree with a person's opinion, but at least give me one point to substantiate it.
A crazy nut holding a "The World is Coming to an End" sign on a street corner has as many facts to support his belief as this lazy writer had.


I didn't find any misspellings.
:rolleyes:

If you pick a random article about finance, it will probably be in contention for the worst ever.
 
Maybe I am being too hard on the guy. Maybe it was MarketWatch's fault.... Maybe they called him and said... "Hey Farrell, we need a 2-page article that will get a lot of clicks, and you've got 5 minutes to send it to us."



I read a Farrell article in the early 2000s, and it really put me off MarketWatch; I almost never read him. Unfortunately, it looks like he's just gone downhill!
 
I read a Farrell article in the early 2000s, and it really put me off MarketWatch; I almost never read him. Unfortunately, it looks like he's just gone downhill!
In the 1980s, I hate to admit, I subscribed to a market newsletter that was pretty popular. One month the writer started talking about his piano lessons, or something close to that. As I recall, shortly after that his son took over the newsletter, which he runs to this day. The father was probably going a bit senile.
 
Well, let's back off just a moment. First, let's agree to have a nice, friendly discussion. I'm not in teaching mode here and am very willing to learn a new trick (old dog here). I'm not trying to change people's investing approach, as buy-and-hold is a great choice, and I agree that risk aversion should be covered in the AA.

I wasn't trying to be unfriendly in my comments and I apologize if I came off that way.

My comment was directed solely at supporting the notion that with statistical/predictive models it's very easy to develop one that does exceedingly well on the training data (historical data) but poorly in the future. I was not referencing any specific model or anything that you've proposed.


I'm just referring to stuff I've explored and have no plans to disseminate beyond what I've mentioned. Below I've tried to address the items above in blue, but I don't know what the items above in magenta refer to.

Parsimony is a term often used by statisticians; it just means using a simple model, one that isn't overly complex and doesn't have too many variables. One reason they prefer simple models is that it helps one avoid models that do well on the historical/training data but not in the future (also called overfitting).

Using a holdout set means that one takes the historical data and uses only a subset of it to explore and develop the model. So, for example, I might develop my model (i.e., tune parameters like how many days the moving-average window should be) on data from 1920 to 1970 and then test it on 1971 to 2014. If the model is good, it should do well on the 1971-2014 data, which it never saw.
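
To make this concrete, here's a minimal sketch (synthetic data and a toy one-parameter moving-average rule, not the model under discussion): the window is chosen using only the development data, and the holdout is scored exactly once at the end.

```python
import numpy as np
import pandas as pd

# Synthetic month-end index levels standing in for real 1920-2014 history.
rng = np.random.default_rng(0)
dates = pd.date_range("1920-01-31", "2014-12-31", freq="M")
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.005, 0.04, len(dates)))), index=dates)

def ma_rule_return(prices, window):
    """Total return of a toy rule: hold while price > its moving average, otherwise sit in cash."""
    in_market = (prices > prices.rolling(window).mean()).shift(1, fill_value=False)  # act on last month's signal
    monthly_ret = prices.pct_change().fillna(0.0)
    return float((1 + monthly_ret.where(in_market, 0.0)).prod() - 1)

train = prices.loc[:"1970"]    # development data: the window is tuned here, and only here
test = prices.loc["1971":]     # holdout: looked at exactly once, at the very end

best_window = max(range(2, 25), key=lambda w: ma_rule_return(train, w))
print("window chosen on 1920-1970 data:", best_window)
print("out-of-sample return, 1971-2014:", round(ma_rule_return(test, best_window), 3))
```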

Both these methods can help mitigate (but not eliminate) the problem of developing models that do well historically but not in the future.

Second, there is no predicting going on by me. There is only a potential model-triggered movement from equities to bonds, based on a trend that may or may not continue forward. This is an important point, I think. In acting on the model, one is hoping that 90 years of historical testing will carry forward to at least the next data point (the next month). There is always a point where trends end.

This is somewhat of a side discussion, but I think we may be using the word prediction differently. I agree that it's not a prediction in terms of "I think the S&P 500 will be 2200 next year," but it is a prediction in terms of what the best action to take is: i.e., move to equities, move to bonds, or do nothing.

The model suggests specific actions, and these can be evaluated against data (historical or future) and compared against another course of action (like buy & hold).

If the model is a decent one, there are very few losing sell-then-buy-back round trips over a span of months versus buy-and-hold. None of the moving-average approaches I've seen pass this test. Also, if the model is a decent one, there are very few sell-buy pairs: maybe one every 4 or 5 years.
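
To make that test concrete, here's a rough scoring harness on synthetic data; the 10-month moving-average rule and all the names here are stand-ins, not the model being discussed. It counts the sell-buy pairs a signal generates and how many of them bought back in at a higher price than the sale (i.e., a round trip that lost ground to buy-and-hold).

```python
import numpy as np
import pandas as pd

# Synthetic monthly prices standing in for real index history (replace with actual data).
rng = np.random.default_rng(1)
dates = pd.date_range("1925-01-31", "2014-12-31", freq="M")
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.005, 0.04, len(dates)))), index=dates)

# Illustrative signal: in the market when price is above its 10-month moving average.
signal = (prices > prices.rolling(10).mean()).shift(1, fill_value=False)

prev = signal.shift(1, fill_value=False)
exits = prices.index[(prev & ~signal).to_numpy()]      # months we move to bonds/cash
entries = prices.index[(~prev & signal).to_numpy()]    # months we move back into equities

# Pair each exit with the next re-entry and check whether we bought back at a higher price.
round_trips = [(s, entries[entries > s][0]) for s in exits if len(entries[entries > s])]
losing = sum(prices.loc[b] > prices.loc[s] for s, b in round_trips)

years = len(dates) / 12
print(f"{len(round_trips)} sell-buy pairs in {years:.0f} years "
      f"({years / max(len(round_trips), 1):.1f} years per pair); "
      f"{losing} bought back higher than the sale price")
```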

I haven't followed any of the literature on market timing with moving averages but it's pretty clear that the problem is super-hard.
 
...
My comment was directed solely at supporting the notion that with statistical/predictive models it's very easy to develop one that does exceedingly well on the training data (historical data) but poorly in the future. I was not referencing any specific model or anything that you've proposed.

Parsimony is a term often used by statisticians; it just means using a simple model, one that isn't overly complex and doesn't have too many variables. One reason they prefer simple models is that it helps one avoid models that do well on the historical/training data but not in the future (also called overfitting).

Using a holdout set means that one takes the historical data and uses only a subset of it to explore and develop the model. So, for example, I might develop my model (i.e., tune parameters like how many days the moving-average window should be) on data from 1920 to 1970 and then test it on 1971 to 2014. If the model is good, it should do well on the 1971-2014 data, which it never saw.

Both these methods can help mitigate (but not eliminate) the problem of developing models that do well historically but not in the future.
...
I agree with everything you said, and like you, my comments are directed at models in general rather than any specific model.

Just a few additional observations. While the holdout method is reasonable in theory, it is practically impossible for a human to accomplish, for the simple reason that it works only if you apply it once. The problem is that we humans like to tinker. If the first model, developed on one subset of the data, is not good enough, we modify it and run it against the holdout data again, and we keep doing this until we get good results on both the back-tested and the holdout data. At that point we are just fooling ourselves: we no longer have any holdout data, because we used it to build the model. We have only succeeded in making ourselves more confident about a model which, while great at backtesting, may have no ability to forecast.
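
A toy simulation of that trap, with everything made up for illustration: the "models" below are literally coin flips on pure-noise returns, yet if you keep going back to the holdout, a fair number of them end up looking validated on both halves of the data.

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.0, 0.04, 1140)            # ~95 years of monthly returns, zero drift by construction
train, holdout = returns[:600], returns[600:]

def total_return(in_market, rets):
    return np.prod(1 + np.where(in_market, rets, 0.0)) - 1

def coin_flip_model(seed, rets):
    """A 'model' with no skill at all: a random in/out decision each month."""
    flips = np.random.default_rng(seed).random(len(rets)) < 0.5
    return total_return(flips, rets)

bh_train, bh_holdout = np.prod(1 + train) - 1, np.prod(1 + holdout) - 1

# Tinker: try 200 variants and keep only those that beat buy & hold on BOTH the training
# and the holdout data -- which is exactly what repeated peeking at the holdout amounts to.
survivors = [s for s in range(200)
             if coin_flip_model(s, train) > bh_train and coin_flip_model(s, holdout) > bh_holdout]
print(f"{len(survivors)} of 200 skill-free models pass on both train and holdout")
```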

Another approach, which can work depending on the type of data, is to generate random series with the same statistical parameters as the original data (mean, standard deviation, autocorrelation, etc.), and then apply the same algorithm that was used to build the original model to these random series to produce a model and a forecast. Run this many times with different random series and you can build up a distribution of prediction results and see how far out on the curve your real model falls: is it better than 95%, or 99%, of the random models, something like that.
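
Here's a rough sketch of that idea, with synthetic data and a block bootstrap standing in for "random series with the same statistical parameters"; the model-building step is just a moving-average window search, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_bootstrap(returns, block=12):
    """Resample the returns in 12-month blocks so mean, volatility, and some
    short-range autocorrelation are roughly preserved."""
    n = len(returns)
    starts = rng.integers(0, n - block, size=n // block + 1)
    return np.concatenate([returns[s:s + block] for s in starts])[:n]

def fit_and_score(returns):
    """Stand-in for 'the algorithm used to build the original model': search for the
    best moving-average window and return its edge over buy & hold on this series."""
    prices = 100 * np.cumprod(1 + returns)
    buy_hold = np.prod(1 + returns) - 1
    best = -np.inf
    for w in range(2, 25):
        ma = np.convolve(prices, np.ones(w) / w, mode="valid")     # MA ending at each month
        signal = np.concatenate([np.zeros(w, dtype=bool), prices[w - 1:-1] > ma[:-1]])
        best = max(best, np.prod(1 + np.where(signal, returns, 0.0)) - 1)
    return best - buy_hold

real_returns = rng.normal(0.005, 0.04, 1140)     # placeholder for the actual monthly history
real_edge = fit_and_score(real_returns)
null_edges = [fit_and_score(block_bootstrap(real_returns)) for _ in range(200)]
print(f"real model beats {np.mean([real_edge > e for e in null_edges]):.0%} of the random-series models")
```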

But here we have exactly the same problem as with holdout data. If we don't get a model that's better than chance, we will keep tinkering until we do. And if we build a hundred models, on average one of them will get out to the 99% level.

So in the end, there is actually no practical way to test a model.
 
Photoguy, thanks for the comments and explanations. I'm familiar with the concepts but not so much the lingo.
 
What if it doesn't drop back but keeps going up with only minor corrections? That's quite possible given a recovering economy, and being out of the market could carry a lot of opportunity cost.

I know people who bailed back in 2008, never got back in and regret it.

Yep, one of my former co-workers, whom I considered very smart financially, went all cash and stable value after the crash. I stayed in the market and even added more to it.

He's about 5 years older than me and he'll still be working for a while, while I'm ER. It's kinda sad because he could've been gone by now too.
 
...
Just a few additional observations. While the holdout method is reasonable in theory, it is practically impossible for a human to accomplish, for the simple reason that it works only if you apply it once. The problem is that we humans like to tinker. If the first model, developed on one subset of the data, is not good enough, we modify it and run it against the holdout data again, and we keep doing this until we get good results on both the back-tested and the holdout data. At that point we are just fooling ourselves: we no longer have any holdout data, because we used it to build the model. We have only succeeded in making ourselves more confident about a model which, while great at backtesting, may have no ability to forecast.
...
Good comment, and that mirrors my experience. So the holdout method is not very realistic unless we are talking about a situation where one person develops the model on some data sets and another person independently tests it on a new data set. For market testing, that new data set would have to include new time periods. I've read that the Fama-French work was tested by others on non-US data sets.

What I've done is some modest sensitivity analysis. A decent model will not be too sensitive to the few parameters chosen. I think one can and should optimize the parameters for the best performance across all time periods. But then one should be realistic and assume that the forward application will probably call for a somewhat different set of optimized parameters, and thus the model will fall somewhat short of its past performance going forward.
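
As an illustration of that kind of check (synthetic data and a toy moving-average rule, nothing to do with the actual model): sweep the one parameter and see whether performance holds up in a neighborhood of the optimum instead of spiking at a single value.

```python
import numpy as np
import pandas as pd

# Synthetic monthly prices standing in for the real series.
rng = np.random.default_rng(3)
dates = pd.date_range("1925-01-31", "2014-12-31", freq="M")
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.005, 0.04, len(dates)))), index=dates)
rets = prices.pct_change().fillna(0.0)

def total_return(window):
    """Total return of a toy rule: hold equities while price > its moving average, else sit out."""
    in_market = (prices > prices.rolling(window).mean()).shift(1, fill_value=False)
    return float((1 + rets.where(in_market, 0.0)).prod() - 1)

sweep = {w: total_return(w) for w in range(4, 21)}
best = max(sweep, key=sweep.get)
neighbors = [sweep[w] for w in (best - 2, best - 1, best + 1, best + 2) if w in sweep]
print(f"best window {best}: {sweep[best]:.1%}; windows within 2 months average {np.mean(neighbors):.1%}")
# A robust choice should not see performance collapse when the window shifts by a month or two.
```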
 
...
What I've done is some modest sensitivity analysis. A decent model will not be too sensitive to the few parameters chosen. I think one can and should optimize the parameters for the best performance across all time periods. But then one should be realistic and assume that the forward application will probably call for a somewhat different set of optimized parameters, and thus the model will fall somewhat short of its past performance going forward.

Good point. A lot of people don't realize that the best, most robust multivariate model will usually not be the one that fits the data best. There's no point in fitting the noise. The problem then becomes: how do you determine what is noise?
 
FWIW some market made it to 5000.

Nasdaq Hits 5000 for First Time Since 2000

"The Nasdaq Composite climbed briefly above the 5000-point level for the first time in almost 15 years, marking a milestone in the revival of an index that once was synonymous with dot-com excess." Noted in the Wall str. Journal.

Hmmm, is that good or bad?
 
While the holdout method is reasonable in theory, it is practically impossible for a human to accomplish, for the simple reason that it works only if you apply it once. The problem is that we humans like to tinker. If the first model, developed on one subset of the data, is not good enough, we modify it and run it against the holdout data again, and we keep doing this until we get good results on both the back-tested and the holdout data. At that point we are just fooling ourselves: we no longer have any holdout data, because we used it to build the model. We have only succeeded in making ourselves more confident about a model which, while great at backtesting, may have no ability to forecast.

That's a very subtle but important point. And it's probably one of the top reasons for overfitting in models.
 