Comment on Study finds that ChatGPT will cheat when given the opportunity and lie to cover it up later.

SmoothIsFast@citizensgaming.com 9 months ago

> Your description is how pre-llm chatbots work

Not really; we just parallelized the computation and used other models to filter our training data and tokenize it. Sure, the loop looks more complex because of the parallelization and because the words used as inputs and selections are tokenized, but that doesn’t change the underlying principles here.
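
To make that concrete, here is the underlying loop as a toy sketch. Everything in it (the tiny vocabulary, the fake `next_token_probs`) is made up purely for illustration, not anyone’s actual implementation: compute a probability distribution over the next token, pick one, append it to the context, repeat.

```python
import numpy as np

# Toy stand-ins: a tiny vocabulary and a fake "model". None of this is a
# real tokenizer or a real LLM; it just shows the shape of the loop.
VOCAB = ["<pad>", "the", "cat", "sat", "on", "a", "mat"]

def tokenize(text):
    return [VOCAB.index(w) for w in text.split() if w in VOCAB]

def next_token_probs(token_ids, rng):
    # Placeholder for the model: any function that maps the context so far
    # to a probability distribution over the vocabulary.
    logits = rng.normal(size=len(VOCAB)) + np.bincount(token_ids, minlength=len(VOCAB))
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def generate(prompt, n_new=5, seed=0):
    rng = np.random.default_rng(seed)
    tokens = tokenize(prompt)
    for _ in range(n_new):
        probs = next_token_probs(tokens, rng)    # one probability calculation per step
        tokens.append(int(np.argmax(probs)))     # pick the most likely next token
    return " ".join(VOCAB[t] for t in tokens)

print(generate("the cat sat on a mat"))
```

Parallelizing and scaling that up changes how fast and how precisely each step runs, not the shape of the loop.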

> Emergent properties don’t require feedback. They just need components of the system to interact to produce properties that the individual components don’t have.

Yes, they need proper interaction, or, you know, feedback, for this to occur. Glad we covered that. Having more items but gating their interaction is not adding more components to the system; it’s creating a new system that follows the old one, which in this case is still just more probability calculations. Sorry, but chaining probability calculations is not going to somehow make something sentient or aware. For that to happen, the model would need to be able to influence its internal weighting or training data without external aid. Hint: these models are deterministic and their weights never change during inference, meaning there is zero feedback or interaction to create emergent properties in this system.
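
To see what “deterministic, no feedback” means in practice, here is a toy greedy decoder (made-up weights, not a real model). The weights are only ever read, never written, so the same prompt gives the same output every time:

```python
import numpy as np

# Frozen toy "weights": created once, only ever read below, never updated.
rng = np.random.default_rng(42)
WEIGHTS = rng.normal(size=(7, 7))

def greedy_decode(token_ids, steps=5):
    tokens = list(token_ids)
    for _ in range(steps):
        logits = WEIGHTS[tokens[-1]]           # read-only lookup into the weights
        tokens.append(int(np.argmax(logits)))  # deterministic choice, no randomness
    return tokens

# Same input, same weights -> identical output on every run.
print(greedy_decode([1, 2, 3]))
print(greedy_decode([1, 2, 3]))
```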

> Emergent properties are literally the only reason llms work at all.

No, LLMs work because we massively increased the size and throughput of our probability calculations, allowing increased precision in the predictions, which makes the output look more intelligible. That’s it. Garbage in, garbage out still applies, and making the model larger does not mean that garbage is going to magically create new control loops in your code. It might increase precision, since you have more options to compare and weight against, but it does not change the underlying system.
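
To illustrate that scaling point with a toy sketch (the sizes below are arbitrary, purely for illustration): the forward pass is the exact same function whether the weight matrices are small or huge; making the model bigger only changes the shapes.

```python
import numpy as np

def next_token_logits(w, token_ids):
    # One simplified forward pass: embed the context, mix it once,
    # then score every vocabulary token. Same code regardless of scale.
    ctx = w["embed"][token_ids].mean(axis=0)
    hidden = np.tanh(w["mix"] @ ctx)
    return w["unembed"] @ hidden

def make_weights(vocab, dim, seed=0):
    rng = np.random.default_rng(seed)
    return {"embed": rng.normal(size=(vocab, dim)),
            "mix": rng.normal(size=(dim, dim)),
            "unembed": rng.normal(size=(vocab, dim))}

small = make_weights(vocab=100, dim=16)      # tiny model
large = make_weights(vocab=5_000, dim=512)   # much bigger model, same code path

print(next_token_logits(small, [1, 2, 3]).shape)  # (100,) candidate next tokens
print(next_token_logits(large, [1, 2, 3]).shape)  # (5000,) candidates, finer-grained scores
```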
