Comment on AI chatbots unable to accurately summarise news, BBC finds
rottingleaf@lemmy.world 1 month ago
Qualitatively.
maniclucky@lemmy.world 1 month ago
That response doesn’t make sense. Please clarify.
rottingleaf@lemmy.world 1 month ago
A human can move, and a car can move; but a human can’t move at a car’s speed. The former is a qualitative difference, as I meant it; the latter is quantitative.
Anyway, that’s how I used those words.
maniclucky@lemmy.world 1 month ago
Ooooooh. Ok, that makes sense. Correct use of the words; I just wasn’t connecting those dots.
rottingleaf@lemmy.world 1 month ago
Yes.
That’s fundamentally solvable.
I’m not against attempts at global artificial intelligence, just against one approach to it. Also, no matter how much we want to pretend it’s something general, what we in fact want is something that thinks like a human.
What all these companies like DeepSeek and OpenAI have lately been doing with “chain-of-thought” models is, in my opinion, what they should have been focused on all along: how do you organize data for a symbolic logic model, how do you generate and check syllogisms, and how do you then synthesize algorithms from syllogisms? There seems to be a chicken-and-egg problem between logic and algebra: in such a system each seems necessary for the other (for a machine, that is; humans have had a few things stay constant for most of our existence). And the predictor into which they’ve invested so much data is a minor part which doesn’t have to be so powerful.
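To make the “checking syllogisms” idea concrete, here is a minimal sketch, not from any real system: it encodes “all X are Y” facts as tuples and checks the classic Barbara form (all M are P, all S are M, therefore all S are P). The representation and function name are purely illustrative assumptions.

```python
# Toy syllogism checker. A fact ("all", x, y) means "all x are y".
# This only handles the Barbara figure; a real symbolic-logic model
# would cover all valid forms and chain them.

def check_barbara(premises, conclusion):
    """Return True if the conclusion ("all", s, p) follows by
    Barbara from two premises: all s are m, and all m are p."""
    _, s, p = conclusion
    for (_, a, m) in premises:          # candidate "all s are m"
        if a != s:
            continue
        for (_, m2, b) in premises:     # candidate "all m are p"
            if m2 == m and b == p:
                return True
    return False

premises = [("all", "humans", "mortals"), ("all", "greeks", "humans")]
print(check_barbara(premises, ("all", "greeks", "mortals")))  # True
print(check_barbara(premises, ("all", "mortals", "greeks")))  # False
```

Even this toy version shows the shape of the problem: generating candidate syllogisms is search, while checking one is mechanical, which is roughly the division of labor a chain-of-thought system needs.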