Comment on Microsoft CEO warns that we must 'do something useful' with AI or they'll lose 'social permission' to burn electricity on it

Aceticon@lemmy.dbzer0.com 11 hours ago

AI isn’t at all reliable.

Worse, it has a uniform distribution of failures across the domain of seriousness of consequences - i.e. it's just as likely to make small mistakes with minuscule consequences as major mistakes with deadly consequences - which is worse than even the most junior of professionals.

(This is why, for example, an LLM can advise a person with suicidal ideas to kill themselves)

Then on top of this, it simply will not learn: if it makes a major, deadly mistake today and you try to correct it, it's just as likely to make a major, deadly mistake tomorrow as if you hadn't tried to correct it. Even if you have access to adjust the model itself, correcting one kind of mistake just moves the problem around and is akin to trying to stop the tide on a beach with a sand wall - the only way to succeed is to have a sand wall for the whole beach, by which point it's in practice not a beach anymore.

You can compensate for this with human oversight of the AI, but at that point you're back to paying humans for the work being done. So instead of the cost of a human doing the work, you have the cost of the AI doing the work plus the cost of the human checking the AI's work - and the human has to check the entirety of it just to be sure. Worse, unlike a human's, the AI's work will never improve: it will never include the kinds of improvements that humans doing the same work discover over time, improvements that make later work or other parts of the work easier to do (i.e. the product of experience).

This seriously limits the use of AI to things where the consequences of failure can never be very bad (and if you also include businesses, "not very bad" covers things like "does not significantly damage client relations", which is much broader than merely "not life-threatening"). That leaves mostly entertainment, plus situations where the AI alerts humans to a potential finding within a massive dataset and it's alright if the AI fails to spot it - for example, face recognition in video streams for general surveillance, where humans watching those streams would be just as likely or more likely to miss it - and where, if the AI spots something that isn't there, the subsequent human validation can dismiss it as a false positive.

So AI is a nice new technological tool in a big toolbox, not a technological and business revolution justifying the stock market valuations around it and investment money sunk into it.
