Comment on Major shifts at OpenAI spark skepticism about impending AGI timelines
MentalEdge@sopuli.xyz 3 months ago

"In LLMs we simulate the chemical properties of the neurones using math."
No, we don’t. A machine learning node accepts inputs, which it processes into one or more outputs. But literally no part of how such a node actually functions is based on, or limited to, what we THINK human neurons do.
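To make the point concrete, here is a minimal sketch of what an ML "neuron" actually is (names and numbers are illustrative, not from any real model): a weighted sum of inputs pushed through a fixed math function. No chemistry is being simulated anywhere.

```python
import math

# A hypothetical minimal machine-learning "neuron": a weighted sum of
# inputs plus a bias, passed through a fixed activation function.
# No chemistry, no spikes, no neurotransmitters -- just arithmetic.
def ml_node(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

out = ml_node([0.5, -1.0], [0.8, 0.3], 0.1)
print(out)  # a single number between 0 and 1
```

That single arithmetic expression is the entire "neuron"; everything an artificial network does is stacks of these.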
"And we already have prototype chips that work with lab-grown brain tissue, which show very efficient training capabilities in machine learning (it already plays Pong)."
Using actual biological neurons for computing is a completely separate field of study with almost no overlap with machine learning.
Stop pulling shit out your ass.
Petter1@lemm.ee 3 months ago
Well😆that made me laugh, sorry
MentalEdge@sopuli.xyz 3 months ago
Chips with actual biological neurons are in no way equivalent to the neural networks constructed for machine learning applications.
Do not confuse the two.
Petter1@lemm.ee 3 months ago
So how are they different, since you seem to know…
MentalEdge@sopuli.xyz 3 months ago
Are you serious? Start looking this stuff up instead of smugly acting like you can’t possibly have guessed wrong.
One is literal living neurons, activated and read by electrodes. What exactly happens in the neurons is a complete mystery. I don’t know, because NO ONE KNOWS. Neurons use so much more than simple on/off states, sending different electric and chemical signals with different lag-times with who-knows-what signaling purpose. Their structure is completely random with connections going around with seemingly no rime or reason, and we certainly don’t control how exactly they grow.
Machine learning neurons are literally just arbitrary input-output nodes. How they accept input and transform it can be coded to work however you like. And it is. They don’t simulate shit, because we don’t know exactly how biological neurons work. We run them on parallel processors like GPUs, but that still doesn’t let us do anything like whatever neurotransmitters do in a brain.
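"However you like" is meant literally. A hedged sketch (all names and values made up for illustration): the same node becomes a different "neuron" just by swapping which function you plug in, because nothing biological is being modeled.

```python
import math

# The "neuron" is whatever function the programmer chooses.
# Swap the activation and you have a different node -- nothing is simulated.
activations = {
    "relu": lambda z: max(0.0, z),          # clamp negatives to zero
    "tanh": math.tanh,                      # squash into (-1, 1)
    "step": lambda z: 1.0 if z > 0 else 0.0,  # hard on/off threshold
}

def node(inputs, weights, activation):
    z = sum(x * w for x, w in zip(inputs, weights))
    return activations[activation](z)

for name in activations:
    print(name, node([1.0, 2.0], [0.5, 0.25], name))
```

Three different arbitrary choices, three different behaviors, none of them grounded in how biological neurons signal.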
Additionally, they get arranged in sequential layers, where the overall structure of the model is predetermined to optimize for a given task before training even starts. Brains don’t do that. They just work. Somehow. The interconnections in a brain are orders of magnitude more complex, and they form on their own.
The way they learn is completely different, too. Machine learning models are trained by blind iterative optimization: usually gradient descent, nudging millions of numeric weights over millions of steps to shrink an error score, or in some approaches evolutionary search, generating many mutated copies of a model, keeping whichever works best, and repeating.
With neurons, it just works. You don’t need a million iterations to get ONE that works. And we don’t know HOW brains do that.
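Whatever the optimization scheme, training is just blind, repeated number-tweaking. A toy gradient-descent sketch (entirely made up for illustration): a single weight is nudged thousands of times by an error signal until it learns to map x to 2x.

```python
# Toy gradient-descent training loop: thousands of tiny numeric nudges
# to the SAME weight, each driven by an error measurement.
w = 0.0    # the one trainable weight, starting from nothing
lr = 0.1   # learning rate: how big each nudge is
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # examples of y = 2*x

for step in range(1000):  # many iterations over the same model
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # derivative of squared error w.r.t. w
        w -= lr * grad             # nudge the weight downhill

print(round(w, 4))  # converges to 2.0
```

Scale the same loop up to billions of weights and you have, in caricature, how large models are trained: no insight, just iteration.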
Also, fuck you, I’m blocking you now. Go learn this stuff properly before you open your mouth again. You’re a misinformed fool. Stop being one.