AI-powered active management (AIEQ)

It's been 4 years since this thread was started. Because of the thread on SDC (self-driving cars), which use AI (artificial intelligence), I was reminded of this AI-driven ETF and looked up its performance.

If you had invested $10K in AIEQ in Jan 2018, shortly after its inception and almost 5 years ago, you would now have $13,645, vs. $16,538 if you had chosen VFINX (the Vanguard 500 Index fund). Both numbers include dividend reinvestment.

In terms of CAGR (compound annual growth rate), AIEQ returned 6.53% against 10.77% for the S&P 500.
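
For anyone who wants to check the math, here is a rough sketch in Python. The ~4.9-year holding period is my approximation of Jan 2018 through late 2022, so the numbers will differ slightly from Portfolio Visualizer's exact-date figures.

# Rough CAGR check for a $10K starting investment (holding period is approximate).
years = 4.9  # Jan 2018 through roughly late 2022
for name, end_value in [("AIEQ", 13_645), ("VFINX", 16_538)]:
    cagr = (end_value / 10_000) ** (1 / years) - 1
    print(f"{name}: {cagr:.2%}")  # roughly 6.5% and 10.8%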

Is the AI stock-trading computer getting better? It does not appear so: the 2022 YTD performance is -25.32%, versus a 13.22% drop for the S&P 500.

Source: https://www.portfoliovisualizer.com

Wow! Thanks for this. I always doubted I'd ever go the AI route, but I wondered if I might be missing out on something.
 
Not all AI is correct.
Not all technology is good.
Not all "models" are correct.
 
Hmmm...

Why doesn't the management of AIEQ use good and correct AI, in order to make money instead of lose it? :angel:
 
It's been 4 years since this thread was started. ... If you had invested $10K in AIEQ in Jan 2018 ... you would now have $13,645, vs. $16,538 if you had chosen VFINX ... the 2022 YTD performance is -25.32%, versus a 13.22% drop for the S&P 500.

I wonder: what if this black box called AIEQ had gone skyward? Would this thread be many pages longer?

Once a friend mentioned investing in some sort of Schwab black box. When the market went down, so did the black box. He lost faith in the black box.

Unfortunately, much of investing is about having faith in some technique. But hopefully you understand the foundations of that technique.
 
We don't know exactly what goes on inside this AI black box for trading stocks.

But it is well known that even the better and much more benign AI systems can produce some weird results, as demonstrated by MIT a few years ago. I shared that here on this forum, and just found it again.

I would think that this object-ID AI has an easier job than a stock-trading AI, yet it can be fooled by something that even a 2-year-old would not fall for.

 
Who needs gun stores to buy rifles? Just get a toy turtle and you are good to go!
 
I'm confused - what changed to make the AI go from turtle to rifle? The top looked the same to me, just a couple screw holes on the bottom?

OK, I opened two views side by side and had to home in on similar frames. Some unnatural but subtle changes to the turtle were made. Mostly, the symmetrical patterns were made asymmetrical. They should have shown them side by side. I'm too lazy to take a screenshot; it's late.

-ERD50
 
Mostly, the symmetrical patterns were made asymmetrical...

I believe this was the key point, although the AI software was also confused by the side view.

All this goes to show that this AI software is not that robust. It may appear to work well, but it is quite fragile.
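
For the technically curious, here is a minimal sketch of the general idea behind such perturbation attacks, assuming PyTorch/torchvision and a hypothetical turtle.jpg. The MIT work used a more sophisticated targeted method, but the principle of nudging pixels in whichever direction most confuses the model is the same.

# Minimal FGSM-style adversarial perturbation sketch (assumes torch and torchvision).
# A tiny, structured change to the image can flip the classifier's label,
# much like the perturbed turtle being read as a rifle.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
to_tensor = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

img = to_tensor(Image.open("turtle.jpg").convert("RGB")).unsqueeze(0)  # hypothetical image
img.requires_grad_(True)

logits = model(normalize(img))
label = logits.argmax(dim=1)               # whatever the model currently thinks it is
F.cross_entropy(logits, label).backward()  # gradient of the loss w.r.t. the pixels

epsilon = 0.03                             # small perturbation budget
adversarial = (img + epsilon * img.grad.sign()).clamp(0, 1).detach()

print("before:", label.item())
print("after: ", model(normalize(adversarial)).argmax(dim=1).item())  # often a different class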

Would you trust your life to something like this? I am asking as an engineer.
 

Attachment: turtle.JPG
I have shared examples of how AI object identification can be fooled in a thread on Self-Driving Car. There was not much interest. People just did not appreciate the potential danger in that application. When this is brought up in a thread about AI and money, well, people are more interested in money matters.

While I have your attention, how about another example of AI messing up? Can you see the danger of the identification failure here?

 
I could give some more examples of computer models failing; AI is just one way of implementing a model.

However, those examples are not about money either, and they are too controversial to discuss on this forum.
 
While I have your attention, how about another example of AI messing up? ...

We were away in Canada, and I got an alert about a person in my computer room.
So I logged in to look and saw nothing...
After a few alerts, I realized that a blanket that had rolled down in the partially open closet was fooling the AI into thinking someone was hiding in there.

So I had to ignore the alerts from that camera for the rest of the trip, as I couldn't tell it to ignore that area/spot in its view.

I should rename my camera to "dummy". :LOL:
 
AI, or specifically neural networks, has done some amazing things. Back in the 90s, my megacorp had some R&D projects looking into using voice recognition as a way for pilots to talk to the avionics suite. The idea was that for military vehicles, in volatile high-stress combat situations, freeing the pilots from fumbling with keypads for data entry would be highly desirable.

I was not involved in these projects, but heard from friends that the technology then was too elementary and unreliable to work. They said the pilots would only be more stressed: "Oh sh!*. That's not what I said. Noooo!"

And now, I am quite impressed how well voice commands work with dirt cheap everyday electronics. Most of the time.

Still, for applications that can mean life or death in case of an error, one does not jump into it head-first. That would be highly foolish, as quite a few have done.
 
AI, or specifically neural networks, has done some amazing things. Back in the 90s, my megacorp had some R&D projects looking into using voice recognition as a way for pilots to talk to the avionics suite. ...

And now, I am quite impressed how well voice commands work with dirt cheap everyday electronics. Most of the time.
...

I'm also very impressed with current voice recognition, pretty amazing. And for the non-technical types: voice recognition is much, much harder to do than text-to-speech. Though, like almost everything, getting the last 10% of text-to-speech to be really good isn't so easy.

But in the case of pilots, they could use a very limited vocabulary, so the recognition would be much easier (but apparently still not enough at that time).

-ERD50
 
But in the case of pilots, they could use a very limited vocabulary, so the recognition would be much easier (but apparently still not enough at that time).

-ERD50

True. Yet, my friend said that they trained the system extensively to recognize the voice of each test pilot individually. And it took a lot of training in the combat simulator.

The project led nowhere.

I would not be surprised if they were trying it again now.

But back to the thread topic...

We see a lot of successful pattern-recognition applications in voice recognition, object image identification, handwriting recognition, etc. How do we do the same with stock picking?

One thing that sticks out in my mind is making money off bubbly stocks. Imagine the money that could be made by piling onto bubbly stocks early, then going short when the bubble is pricked. :)

Now, a lot of people have tried to do that, and not all are successful. Or if they are, they cannot do it consistently. So, how do we train a computer to do it better than we can?
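
To make the question concrete, here is a toy sketch of how "training a computer to pick stocks" is usually framed: as a supervised learning problem on past prices. The file name and features are invented for illustration, and nothing here is claimed to beat the market; that is exactly the hard part.

# Toy framing of stock picking as supervised learning (pandas + scikit-learn assumed).
# prices.csv is a hypothetical file of daily closing prices with a "close" column.
import pandas as pd
from sklearn.linear_model import LogisticRegression

prices = pd.read_csv("prices.csv")["close"]
returns = prices.pct_change().dropna()

# Features: the previous 5 daily returns. Label: did the next day close up?
X = pd.concat([returns.shift(i) for i in range(1, 6)], axis=1).dropna()
X.columns = [f"lag_{i}" for i in range(1, 6)]
y = (returns.shift(-1).reindex(X.index) > 0).astype(int)
X, y = X.iloc[:-1], y.iloc[:-1]          # drop the last row, whose "next day" is unknown

# Train on the first 80% of history, test on the rest (no peeking into the future).
split = int(len(X) * 0.8)
model = LogisticRegression().fit(X.iloc[:split], y.iloc[:split])
print("out-of-sample accuracy:", model.score(X.iloc[split:], y.iloc[split:]))
# In practice this hovers near a coin flip, which is the whole point.

Writing the code is the easy part; finding features with genuine predictive power, and not fooling yourself with hindsight, is where everyone gets stuck.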

Remember that in the successful AI apps, when we say the computer is good, it means we are impressed by what it can do, not that it beats humans (yet). And when it fails, it fails miserably.

Still, trusting the computer to pick stocks is a relatively benign thing. If you go broke, well, it's only money and not life and limb, unless you jump out of a high-rise window.
 
I would trust the computer to play the next move in chess. But that is a very well-defined situation: only so many pieces and moves, and only 64 squares.

When I finish an online game, the computer (Stockfish) analyzes the game and shows all the blunders. My last game, which I won as Black, showed 73% accuracy; in 27% of the moves I did not play the "best" move.
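
For those curious how that post-game analysis is done, here is a minimal sketch using the python-chess package with a locally installed Stockfish binary (both assumed to be available). The engine scores a position and suggests its "best" move, which is what the accuracy figure compares your moves against.

# Minimal sketch: asking Stockfish for an evaluation and a best move
# (assumes python-chess is installed and a "stockfish" executable is on the PATH).
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("stockfish")

board = chess.Board()                     # starting position; push moves to replay a game
info = engine.analyse(board, chess.engine.Limit(depth=15))
print("evaluation:", info["score"])       # a score from the side-to-move's point of view

best = engine.play(board, chess.engine.Limit(time=0.5))
print("engine's choice:", board.san(best.move))

engine.quit()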

No way would I trust the computer with my money. :)
 
