FiniteBanjo@lemmy.today 2 months ago
LLM- and ML-generated translations produce their output one token at a time. That’s why AI chatbots hallucinate so often: the model decides the next most likely word in the sequence is “No” when the correct answer would be “Yes”, and then the rest of the response devolves into convincing nonsense.
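Roughly what I mean, as a minimal sketch of greedy token-by-token decoding (the probability table is made up for illustration, not taken from any real translation model): once an early 51/49 call goes the wrong way, every later token conditions on that mistake.

```python
# Toy illustration of greedy, token-by-token decoding.
# NEXT_TOKEN_PROBS is a hypothetical table of next-token probabilities
# conditioned on the tokens generated so far.
NEXT_TOKEN_PROBS = {
    (): {"No": 0.51, "Yes": 0.49},
    ("No",): {",": 0.9, ".": 0.1},
    ("No", ","): {"because": 0.8, "since": 0.2},
    ("No", ",", "because"): {"<eos>": 1.0},
    ("Yes",): {".": 1.0},
    ("Yes", "."): {"<eos>": 1.0},
}

def greedy_decode(max_tokens: int = 10) -> list[str]:
    """Pick the single most likely token at each step."""
    output: list[str] = []
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS.get(tuple(output), {"<eos>": 1.0})
        token = max(probs, key=probs.get)  # greedy choice, no lookahead
        if token == "<eos>":
            break
        output.append(token)
    return output

print(greedy_decode())  # ['No', ',', 'because'] -- a 51/49 call locks in the rest
```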
stephen01king@lemmy.zip 2 months ago
Those are not examples, just what you claim will happen based on what you think you understand about how LLMs work.
Show me examples of what you mean. Just run some translations through their AI translator or something and show how often it produces inaccurate translations. Doesn’t seem that hard to prove what you claimed.
FiniteBanjo@lemmy.today 2 months ago
You want examples but you never disclosed which product you’re asking about, and why should I give a damn in the first place? I shouldn’t have to present an absence of evidence of it working to prove it doesn’t work.
stephen01king@lemmy.zip 2 months ago
Bruh, you were criticising a specific product and claiming it provides wrong client-side translations. Why would I be asking about any product other than the one you’re criticising?
And you’re making a claim, so of course you need to give a damn about proving your claim. It’s not someone else’s responsibility to prove what you say.
Proving the translations make mistakes is as simple as providing a few examples. I wasn’t asking you to prove they never make a mistake, which would require you to show there is zero incidence of a wrong translation. What I asked for is the exact opposite of an absence of evidence.
I can’t believe you’re using arguments that you don’t even understand just to avoid proving your own claims. I’m starting to believe you have never even used Firefox’s AI translation and are just blindly claiming it provides wrong translations. What a waste of everyone’s time you’ve been.