Are humans really so predictable that algorithms can easily see through us, or does continuous use of algorithmic feeds make us predictable to their results?
Submitted 3 weeks ago by ICastFist@programming.dev to showerthoughts@lemmy.world
Comments
JASN_DE@feddit.org 3 weeks ago
Humans overall are extremely predictable. Other factors might aggravate this, but even without any tech involved it’s not looking good.
GeekyOnion@lemmy.world 3 weeks ago
Proof of that can be found in things like Pepys’ diary. Dude was stoked about his cool watch, and his dalliance with an actress.
Plesiohedron@lemmy.cafe 3 weeks ago
First: the algorithm predicts that our behavior today will be like our behavior yesterday. Which makes sense.
Second: what you eat determines how you poop. And they do control what we eat. So that makes sense too.
So both work together.
benni@lemmy.world 3 weeks ago
The success of algorithmic feeds does not imply that humans are predictable in general. It just means that humans are predictable in terms of what content will keep them scrolling/watching/listening for some more time.
zarathustra0@lemmy.world 3 weeks ago
LLMs: high fidelity stochastic bureaucracy.
Subtly categorising people into bureaucratically compatible holes since 2021.
Zos_Kia@lemmynsfw.com 3 weeks ago
I think what’s important to understand is that these things work because they operate at a certain scale. Algorithms are notoriously bad at predicting individual behaviour, which is why recommendation engines are a specialization that is far from solved. But when you have large amounts of traffic, the law of large numbers lets you predict group behaviour with some accuracy.
So you can’t follow a user around, predict their next move, and show them the right ad at the right time. But you can take 50,000 middle-aged males and bet that at least 10 of them will buy a motorbike if you randomly show them a picture of a guy riding into the sunset. Once you have a good volume of this kind of data, you can do some casino math to tilt all your bets slightly in your favour and start betting 24/7.
It’s essentially cold reading, like they do in those mentalist shows. It’s a lot dumber than it looks, but it’s way more effective than you think.
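The group-vs-individual point can be sketched in a few lines. This is a toy simulation, not any real ad platform’s math: the conversion rate and group size are made-up numbers taken from the example above.

```python
import random

# Toy illustration of the "casino math": any single user is basically a
# blind guess, but the count across a large group is stable and bettable.
random.seed(42)

p_buy = 10 / 50_000      # assumed chance that any one viewer converts
group = 50_000           # middle-aged males shown the sunset-ride ad

# Individual level: predicting one user is essentially a coin toss
# weighted overwhelmingly toward "no".
one_user_buys = random.random() < p_buy

# Group level: the law of large numbers keeps the total near its
# expectation (10), so you can price your bets against it.
conversions = sum(random.random() < p_buy for _ in range(group))
print(conversions)
```

Run it with different seeds and the per-user outcome jumps around while the group total stays close to 10 — which is all the advertiser needs.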
Fleur_@aussie.zone 2 weeks ago
In large groups, yes. It’s just a statistics thing. For example, I can’t tell whether any given coin flip will come up heads or tails, but I can tell you that out of 100 million flips, about 50 million will be tails.
Drekaridill@feddit.is 3 weeks ago
The former
ArgumentativeMonotheist@lemmy.world 3 weeks ago
We are, but only the truly simple minded can be thoroughly swayed and changed into an antisocial beast of propaganda, tasked with toil and consumption. Also, there’s no need to vilify “the algorithms” or their results… there’s nothing wrong with YouTube recommending me a Japanese “Careless Whisper” cover from the 80s, based on my previous input. 😅
gandalf_der_12te@discuss.tchncs.de 3 weeks ago
Oh, you are so mistaken. Propaganda, which is essentially advertisement for political stances, takes a toll on us all. You just don’t notice it because modern propaganda targets the subconscious more than the conscious mind, and many people have weaker defenses around their subconscious than around their conscious reasoning.
gaiussabinus@lemmy.world 3 weeks ago
GIGO
AbouBenAdhem@lemmy.world 3 weeks ago
Fun fact: LLMs that strictly generate the most predictable output are seen as boring and vacuous by human readers, so designers add a bit of randomization they call “temperature”.
It’s that unpredictable element that makes LLMs seem humanlike—not the predictable element that’s just functioning as a carrier signal.
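For the curious, "temperature" is just a divisor applied to the model’s raw scores before sampling. A minimal sketch (plain Python, no ML library; the logits are made-up numbers):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    # Divide logits by temperature, then softmax. Low temperature sharpens
    # the distribution (near-greedy, "boring"); high temperature flattens
    # it, letting less probable tokens through.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]   # hypothetical scores for three tokens
# Near-zero temperature collapses to the single most probable token.
greedy_pick = sample_with_temperature(logits, temperature=0.01)
print(greedy_pick)  # always 0
```

At temperature near zero the model always emits token 0 — the "most predictable output" the comment above calls vacuous; raising it is what injects the humanlike unpredictability.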
spankmonkey@lemmy.world 3 weeks ago
The unpredictable element is also why they absolutely suck at being the reliable sources of accurate information that they are being advertised to be.
Yeah, humans are wrong a lot of the time but AI forced into everything should be more reliable than the average human.
rhombus@sh.itjust.works 3 weeks ago
That’s not it. Even without any added variability they would still be wrong all the time. The issue is inherent to LLMs; they don’t actually understand your questions or even their own responses. It’s just the most probable jumble of words that would follow the question.
sxan@midwest.social 3 weeks ago
Is it? Is random variance the source of all hallucinations? I think not; it’s more the fact that they don’t understand what they’re generating, they’re just looking for the most statistically probable next token.
masterspace@lemmy.ca 3 weeks ago
Things don’t have to be more reliable if they’re fast enough.
Quantum computers are inherently unreliable, but you can perform the same calculation multiple times and average the result / discard the outliers and it will still be faster than a classical computer.
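The repeat-and-vote idea the comment describes can be sketched abstractly. This is a generic majority-vote error-mitigation toy, not any real quantum SDK; the error rate and shot count are illustrative assumptions:

```python
import random
from collections import Counter

def noisy_measure(true_bit, error_rate=0.2):
    # Hypothetical unreliable computation: flips the answer 20% of the time.
    return true_bit ^ (random.random() < error_rate)

def majority_vote(true_bit, shots=101):
    # Run the noisy computation many times and keep the most common answer.
    counts = Counter(noisy_measure(true_bit) for _ in range(shots))
    return counts.most_common(1)[0][0]

random.seed(0)
result = majority_vote(1)
print(result)  # the vote recovers the true answer, 1
```

With a 20% per-shot error rate, the chance that a majority of 101 shots is wrong is astronomically small — each repetition drives the effective error rate down exponentially.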
EchoSnail@lemmy.zip 3 weeks ago
You just ruined the magic of ChatGPT for me lol. Fuck. I knew the illusion would break eventually but damn bro, it’s fuckin 6 in the morning.
vrighter@discuss.tchncs.de 3 weeks ago
i.e. their fundamental limitation is, ironically, why they are so easy to hype