I think there’s good potential where the caller needs information.
But I am skeptical about problem-solving, especially where it requires deviating from the standard process. Like last week, I had an issue where a service I signed up for inexplicably set the start date incorrectly. It seems the application does not allow users to change the start date themselves within a certain window. So I went to support and wasted my time with the AI bot until it finally passed me off to a human. The human solved the problem in five seconds because they’re allowed to manually change it on their end, and they just did that.
Clearly the people who designed the software and the process did not foresee this issue, but someone understood their own limitations well enough to give support personnel access to perform manual updates. I worry companies will not want to give AI agents the same capabilities, fearing users could talk the AI agent into giving them free service or something.
MangoCats@feddit.it 1 week ago
It’s easy to get above rock bottom. Today’s voice menus are already openly abusive of the customers.
Oh, a demoralizing thought: when the AI call center agent becomes intentionally abusive… and don’t think that companies, and especially government agencies, won’t do that on purpose.
I have actually had semi-positive experiences with AI chatbot front ends. They’re less afraid to refer you to an actual human being who might know something, as opposed to the front-line call center humans who seem afraid they might lose their job if they admit the truth: that they have absolutely no clue how to help you.
Shift the balance: drop the number of virtually untrained humans in the system by half, train the remaining ones twice as much, and let the AI fill in by routing you to a hopefully appropriate “specialist.”