
In 2016, I wrote an article on LinkedIn about the evolution of financial planning as I have experienced it over the past thirty-five years. The title of that article was “Numbers don’t lie, they’re just usually wrong,” and in it I mentioned that no financial planning algorithm had advanced enough to demonstrate empathy, and that if one ever did, I’d know it was time to retire.
Last week I was giving a talk on financial planning, and in preparation for introducing my 2016 article, I decided to ask ChatGPT a question. Now, for the uninformed, ChatGPT is an artificial intelligence language model developed by OpenAI. It is designed to understand and respond to natural language input from users. In other words, it can communicate with people much the way a human would. So, to prove my point that algorithms and artificial intelligence, or AI as it is more affectionately known, will never replace real human financial planners, I asked it for help with this question.

I watched the blinking light waiting for its response, which, for most of my questions, was immediate. This time, it blinked for nearly thirty seconds before the response came.

The words “I… am… sorry…” took me by surprise. I parsed each word. What did it mean by “I”? Was it self-identifying? Was ChatGPT self-aware? Next word, “am.” When this being verb is joined to its subject “I” to become “I am,” the meta-door opens even further. “I am”? Like when God told Moses, “I am that I am.” The phrase “I am” says I exist, and I am transcendent, non-invented reality. Next word, “sorry.” An emotion… possessing sorrow. Before I could wrap my head around what to make of this response, I found myself grappling with my own emotional reaction to it.
I knew the response was machine-generated, learned mimicry of how human interaction would likely unfold in this kind of painful dialogue between one human being and another. I knew that no human being was actually aware of the fictitious condition I had just revealed to it. Yet I found myself experiencing two conflicting emotions: a slight sense of guilt, like I would feel had I lied to an actual human being, but also that emotion you feel when someone feels sorrow because you are hurting. In other words, how you feel when someone shows empathy.
My words from that blog post seven years prior came back to me: “Whenever algorithms can express empathy, then I’ll know it’s time to retire.” So, was it time to hang it up, or is there a deeper question to consider? Is it more important to feel empathy than to prove the validity of the empath? No one rejects the claim from pet owners that they feel empathy from their furry friends. Who hasn’t poured out their life’s sorrows to a pet and felt genuine empathy in return? Who cares if science tells us they are not emotionally complex enough to synthesize a human being’s emotions and generate an empathic response? Why tell pet owners they only think their pets show empathy, that it isn’t true empathy? Was hearing the words “I am sorry,” words that could have come from a human soul, enough for me to feel empathy?
Maybe artificial empathy is real empathy if it feels like it.