Comment on Air Canada must pay damages after chatbot lies to grieving passenger about discount | Airline tried arguing virtual assistant was solely responsible for its own actions

tiramichu@lemm.ee 8 months ago

I agree that’s 100% what happened in this specific case. The customer had absolutely no reason to suspect the information they were given was bad, and the airline should have honoured the deal.

A top-level comment on the post was also mine, by the way, in which I expressed the same and said “Shame on Air Canada for even fighting it.”

Air Canada were completely and utterly wrong in this case, but I haven’t been talking about this case.

All of my comments in this chain have been trying to discuss what determines, in the general case, which party is in the right when things like this happen.

If it seemed I was arguing about this specific case, then apologies for the confusion. It’s no surprise people would be so intensely against me if it came across that way.

There are cases like this Air Canada one where the customer is obviously in the right. We can also imagine hypothetical cases where I personally believe the customer would be in the wrong - for example, if the customer intentionally exploits a flaw in the system to game a $1 flight - which is again obviously not what happened here; it’s just an example for the sake of argument.

My fundamental point at the start of this comment chain was that I don’t think we actually need any new mechanism to determine who is right, because the existing mechanisms we already have for settling disputes between a company and a customer still apply and work just the same regardless of whether AI is involved.

And the mechanism is, fundamentally, that the customer should always be considered right as long as they have acted in good faith.

That’s why I’m very pleased with the ruling that Air Canada were wrong here, and that they can’t cop out of their responsibilities by blaming the AI.
