Ep. 57 AI, Bias and Financial Decision-Making
THE FINANCIAL COMMUTE


In this week’s episode of THE FINANCIAL COMMUTE, host Chris Galeski and Wealth Advisor Mike Rudow discuss artificial intelligence and its potential impact on the financial industry and other fields.

Mike emphasizes that AI is a large language model with access to data that can be flawed due to inherent biases. Because AI still has a long way to go in producing fully accurate conclusions and analyses, it is not recommended to take financial advice from it alone. However, Chris and Mike acknowledge the effectiveness of using AI in conjunction with human expertise. Studies show that AI can frame medical diagnoses in a more empathetic manner and help professionals across fields conduct research and create content more efficiently.

As the world continues to evolve in the direction of AI-powered tools and software, it is important to be aware of the positive implications of this technology while still being intentional and cautious about how we use it.

Watch previous episodes here:

Ep. 56 Can We Thrive Amidst Uncertainty: War, Rising Rates, Political Chaos

Ep. 55 Embracing Uncertainty: Insights from Our Investor Symposium

Hello, everybody, and thank you for joining us for another episode of THE FINANCIAL COMMUTE. I'm your host, Chris Galeski, joined by Wealth Advisor Mike Rudow. Mike, thanks for joining us, and sorry, I always have a hard time calling you Mike Rudow because I call you Mikey all the time.

I know you talked a lot about AI and the future of AI at the symposium, not just in our business, but in general, so there are some key takeaways from that. But The Wall Street Journal had an interesting article over the weekend, and the headline was, “Can AI replace your financial advisor?”

And it said, “Not yet, but wait.” Before we get into all of the useful things with AI and this technology, and there are a lot of them, there are some issues with it too. One of the pain points that I have is that I already feel like as a society we're getting too headline-driven anyway. And if it was on Google, it's, you know, like the word of the gospel, right?

And so my problem with that is that if this technology is taking any of that B.S. that's out there in the world as fact and coming back with answers, there's some real danger that could come with that.

Absolutely. And when you think about AI being implemented or replacing our role, you've got to take a step back and really remember what that is. It's a large language model that has access to a tremendous amount of data. It takes all of that data, pulls it together, and creates processes so that it can make its own decisions by summarizing that data.

But the data is inherently flawed, because there's a tremendous amount of bias in the data that's out there. It's raw data from the public.

Without a doubt. And, you know, I think the article points to a couple of things, but it really catches your attention because early on it says, hey, this technology can pass the bar exam, right? And there are medical professionals using it to validate, you know, their diagnoses, which kind of scares me.

Well, when you look at it that way, medical professionals have been using some sort of AI for a long time, because what it allowed them to do is access all of the medical information. Now, this is a defined set of information. It's not just pulling from all of the information that's out there. They take a refined set of information, and they're able to look at a list of possible diagnoses from certain symptoms.

So now they're cutting down their time to where they can really focus on, okay, if it's one through 12, this one makes the most sense, as opposed to it could be anything, where they'd have to spend a lot of time refining that. So, yeah, it really is useful as a hybrid model, and the medical profession has been at the forefront of showing how AI can be implemented into a profession where it's not replacing it but enhancing it.

Yeah, and there are some good takeaways not only from this article but from other studies that have shown how the medical profession has leveraged AI. I came across an article not long ago, actually read it, it wasn't a headline, but it mentioned a research report on a study of, you know, a few thousand people over in Europe.

It had just been published maybe 24 or 48 hours prior. And there was a doctor saying, hey, I'm seeing these symptoms, what could this be? And the technology actually pulled up, based on those symptoms, this research report that had just been published, put it to the forefront, and they were able to use that to better diagnose their patient.

And so we like access to information, and getting that information quickly. So, again, the AI headlines are catchy, but the output is only as good as the input.

Absolutely. I think that's where we need to be really careful about how we implement it and what data is being accessed to provide that output. And bringing it back to whether AI can replace a financial advisor, think of what a financial advisor's role is. What do you think the most important thing we do is?

I mean, you've got to understand what a client's needs, goals, fears, and comfort level are. It's all the intangibles as opposed to the data set, right? Because how someone reacts or responds to volatility or certain events more often determines the outcome than anything else.

Absolutely. And I think where we could see AI being a tool is in helping us identify a client's needs, wants, ambitions, and goals, then developing a risk profile for that client, and from there strategizing on an optimal portfolio. We can use AI to help build those models and streamline processes so that we're able to be more efficient and more client-focused, so that we can spend more time with clients, really becoming a part of their family and understanding how we can help them get the most life out of their wealth, as opposed to spending more time in charts and data. So will it be effective as a hybrid model? Yes. Will AI be able to replace the relationship that we're building with our clients and how we're spending our time trying to help them accomplish their dreams? I don't see that. I'm not worried about it.

Yeah, I'm not worried about it at all. I'm actually looking forward to it and will embrace the technology. I know Megan, our chief investment officer, is sitting there saying, hey, hasn't anybody seen Terminator? Like, we kind of know how that goes. I don't know if it's going to go that way, but I'm excited about the technology.

I use it, I would say, on a weekly basis for a number of different things. It helps save time. But again, I'm using more of a hybrid model, asking it things that I already know the answers to, so that I can create some sort of content out of that. But here's one of the interesting things about the article. I think all of us who have been to the doctor have experienced poor bedside manner, you know, from a doctor.

They're very clinical, they're very factual. One of my closest friends is a doctor, and he insists it's funny, the lack of empathy. The article said that there was a competition between medical professionals and ChatGPT to not only diagnose but to give the diagnosis in an empathetic response.

And then a panel of professionals judged the responses.

Only 4% of the answers from the actual physicians were rated as empathetic or very empathetic.

So basically it was stock standard: hey, here's the diagnosis, good luck. All right, not quite that, but from the ChatGPT model, you know, over 40% were considered empathetic and trying to further meet the needs of the patient. And that is largely due to the data that it's pulling from, which shows that humans naturally are empathetic, whereas doctors might be in a more stressed environment trying to get to the black and white of it, because a lot of times doctors are overworked.

So I think that was a big takeaway for me, too, because how is that going to affect our industry? Right. And empathy is a big deal for us, because markets are volatile, and we as a firm might not invest in the public markets like most financial advisors do, but we still have volatility from the investments we hold that aren't in the public markets.

So I thought it was interesting that we could even utilize ChatGPT in a way that would still show our empathy to our clients and maybe even accelerate that a bit.

Look, I think it can definitely help. You can run a response by it and say, hey, can you make sure this is more empathetic, and then you can edit it as well. Besides the empathy part, which I thought was the key human element, you know, that this technology would need to overcome to be able to replace a human.

The other one was biases, right? Humans are known to have biases. And we've said for many, many years, you know, working together, that it's not our job to put our values on our clients' money. It's our job to work with our clients around their values, implementing the decisions they need to make with their money and the decisions they would like for their lives and their future generations.

And so I was always thinking, maybe this technology can remove the bias from some of these decisions that humans have. But no, the article actually says it increases it even more, because this technology is very black and white. There is no gray area when it comes to this technology, and it makes the technology overconfident in the advice that it gives.

Yeah, absolutely.

And I think the example they gave was spot on. If you look at short-term volatility and you just took the last year of market returns and market data, put that in front of a client and said, okay, here's what the market saw in the last year, how do you want to allocate your portfolio?

People went way risk-off, even if they had a 30-year time horizon where they know they're not touching that money for 30 years until retirement; the average portfolio was 40% in equities and 60% in fixed income. Whereas when they put a different set of data in front of them, the long-term returns for the market, and asked the same question, how do you want to allocate the portfolio?

People went to the other side of the spectrum: 90% in equities and 10% in fixed income. So it shows that when people are scared, they get reserved, they tend to invest with less confidence, and they take risk off.

The risk off the table.

I mean, to be fair, right, if you're a person and somebody said, hey, the market averages 8 to 10% a year over however many years, you're going to say, look, I want 8 to 10% a year, and I'm comfortable with that, that sounds good to me. You're going to be more inclined to want more dollars in that space.

And then if I said to you, on that journey to get that 8 to 10% that you're going to average over a long period of time, you have to be willing to have a down-50 or an up-50 year within a 12-month period, like, are you going to be comfortable seeing your million dollars be 500,000 or 1.5 million in the next 12 months? And if your answer is no, you're going to have less in stocks, right?

So that's the silliness with that, right?

Yeah. You can't just make a decision based on that. It's going to be a deeper conversation. It's going to be understanding, you know, what their fears are, what keeps them up at night. And that's where I don't think AI will ever be able to replace an advisor.

One of the biggest takeaways I took from this is that financial literacy in our economy, or even the world, is just not where it needs to be. We need to do a better job of educating and empowering people on the choices they need to make around money and finances and debt, and access to a financial advisor is sometimes not there for every single person.

But this technology could potentially get to a point where, as people need to make decisions, if the data set gets good enough, they'd have access to really good advice that they were not able to get somewhere else. They still might need a second opinion or an advisor to look that over, right? It levels the playing field, and we saw that in Sasan's presentation on AI in the education space. It's tremendous how now, whether you're living in a remote area or you're an inner-city kid with less access to, you know, the proper necessities for a quality education, it's going to allow you to have the same access and the same type of education materials as anywhere else in the world.

It's the same for financial advice, where our industry could be called flawed because we're incentivized to work with people who already have wealth accumulated. So the people that are getting the advice aren't always the people that need the advice the most. AI can change that model in the sense that advisors can take on more relationships, because they can accomplish more in less time.

So they can impact more people by having that hybrid model, where they're looking over what the recommendations are, getting those recommendations out to people who need advice, and kind of having that dual system. You know, so I really do think it levels the playing field in the sense that people that need advice will have access to it.

I was on ChatGPT and I wrote in, you know, I've got a net worth of two and a half million dollars. It's hypothetical, of course, but I went into the software and I said, look, I've got a net worth of two and a half million dollars.

The majority of it is in a trust account, maybe $500,000 in a retirement account. I'm still working and saving, making about a half million dollars a year. I'd like to minimize or avoid taxes as much as possible. And it took about 30 seconds to a minute to give me a response. It said I should consider a charitable remainder trust, a family foundation and, you know, more aggressive tax strategies and write-offs.

Okay. And I thought to myself, okay, it didn't know my age. It didn't know what my income needs were. And it recommended a charitable remainder trust and a family foundation. I'm not sure that at a net worth of two and a half million dollars I would be so eager to just give up access and control of most of my money to a family foundation, even though it's likely to do a lot of good.

Yeah. Or a charitable remainder trust where the dollars are going to go to benefit charity. I'm not so sure that was the best advice.

No, probably not. Especially not at this moment in your life. Right. Which is why having an advisor who would look over, you know, that type of advice and then tweak the recommendations would be really important if that were the model being used, you know, for financial advice. And one last thing, going back to the article: when you think about best interest or consistency.

Or the fiduciary.

The fiduciary role, right. We act as fiduciaries to our clients to make sure that their best interest is always top of mind, taking out bias. And when AI was implemented to make a decision for a client, what they noticed was that it didn't always have the client's best interest in mind. What it did was look at funds that might have had the highest marketing budget, right?

The funds that were pushed the most, that had the most sponsor money behind them. And if the goal for a client might be to keep the lowest-cost fund and index against the market, right, the recommendations that were coming out weren't in the best interest of the client, which I found interesting, because that's not what you would assume from a large language model.

Right. And another point, too, on the accuracy part of it. There was a really, really good example of typing into ChatGPT: I want to compare a Fidelity fund to a Vanguard S&P 500 index fund so I can see which one I want to go with. ChatGPT formulated an answer, spit out a response with confidence, saying, you know, you should go with the Vanguard fund because of, you know, the long-term returns, the fees.

But when you look at the funds it used, it was actually a Nasdaq fund for Vanguard, and it compared that to a Fidelity real estate fund.

Yeah, and the explanation, I mean, it read as if it was spot on, but it was analyzing two completely different funds.

That had no relevance to each other. Right. And so that's where you've really got to be careful if you are using it now as a means to, you know, get advice and get answers for yourself. You really shouldn't be doing that without consulting a professional.

Without a doubt. And I think it goes back to something I said earlier: as a society, we're going more toward treating what's on the Internet and in headlines as fact. And, you know, that's my fear with this technology. It's very powerful technology. It's going to allow us to scale our advice and do some amazing things over time.

Yeah, but we have to be extremely careful and, you know, gut-check it. And so it's more or less that hybrid environment that you and Sasan were talking about, working in collaboration with this technology to give great advice at scale.

And it's going to take time for these layers to be built on top of this large language model, on top of the chatbot, so we can refine the data that we're using to create the desired output. Right? Because right now, if you're just pulling from all data, then it's a shot in the dark. But once this starts to evolve and data sets are pulled together to where you can have targeted data for different aspects, then I think it can be a very impactful model.

Yeah, I agree. You know, Jamie Dimon for years was saying no Bitcoin, no Bitcoin, no Bitcoin, and in recent days it's like, okay, Bitcoin's a thing. So forget about Bitcoin, but I don't want to get caught out with AI by saying no, no, no, now that it's here. So, not to upset Megan with the whole Terminator thing,

I think AI can add value to organizations and our society, but we have to be very careful about, you know, how we're using it.

Yeah, I agree. And I think that if you're not thinking about it, if you're not starting to think about how it can impact your business, then you're going to fall behind. And the impact of not figuring out how this can be utilized will be even greater, you know, because it's going to be in every industry. It's just a matter of when.

Well, it is Halloween, so we're likely to see a few Terminators out there tonight. Let's just hope they're not part of this technology. Yeah. Mike, thank you so much.

Thanks for having me.

Disclosure: Information presented herein is for discussion and illustrative purposes only. The views and opinions expressed by the speakers are as of the date of the recording and are subject to change. These views are not intended as a recommendation to buy or sell any securities, and should not be relied on as financial, tax or legal advice. You should consult with your financial, legal, and tax professionals before implementing any transactions and/or strategies concerning your finances.