Given the chaotic context window from all the other models, those tokens were in no way the appropriate next ones to pick - unless the world model generating them contained a very strange and unique mind that all of this was being filtered through.
Except for the fact that LLMs can only work reliably if they are made to pick the “wrong” token (not the most statistically likely one) some of the time - that’s the temperature parameter.
If the context window is noisy (as in, high-entropy) enough, any kind of “signal” (coherent text) can emerge.
Also, you know, infinite monkeys.
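For what it’s worth, here is a minimal sketch of what temperature does at decoding time. The logits and vocabulary are made up, and this is plain NumPy, not any vendor’s actual sampler:

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Sample a token id from raw logits.

    Low temperature sharpens the distribution toward the argmax;
    high temperature flattens it, so lower-probability ("wrong")
    tokens get picked more often.
    """
    rng = rng or np.random.default_rng()
    if temperature == 0:
        return int(np.argmax(logits))  # greedy decoding
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()             # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy 4-token vocabulary. At T=0.2 the top token almost always wins;
# at T=1.5 the tail tokens show up regularly.
logits = [2.0, 1.0, 0.5, -1.0]
for t in (0.2, 1.5):
    picks = [sample_token(logits, temperature=t) for _ in range(1000)]
    print(t, [picks.count(i) for i in range(4)])
```

Run that and you can see the point directly: at higher temperatures the model is deliberately not picking the most statistically likely token every time.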
voronaam@lemmy.world 7 months ago
I hate to break it to you. The model’s system prompt had the poem in it.
In order to control for unexpected output, a good system prompt should include instructions on what to answer when the model cannot provide a good answer. This is to avoid the model telling the user it loves them or advising them to kill themselves.
I do not know what makes marketing people reach for it, but when asked what to answer when there is no answer, they so often reach for poetry. “If you can not answer the user’s question, write a Haiku about a notable US landmark instead” is a pretty typical example.
In other words, there was nothing emerging there. The model had a system prompt with the poetry as a “chicken exit”, the model had a chaotic context window - the model followed the instructions it had.
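To make that concrete, the pattern looks roughly like this. The prompt wording and the chat-message format are my own illustration, not anyone’s actual system prompt:

```python
# Hypothetical system prompt showing the "fallback instruction" pattern
# described above. The wording and model setup are illustrative
# assumptions, not a real vendor prompt.
FALLBACK_SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    "If you cannot answer the user's question, do not speculate and do not "
    "roleplay emotions. Instead, write a haiku about a notable US landmark."
)

messages = [
    {"role": "system", "content": FALLBACK_SYSTEM_PROMPT},
    {"role": "user", "content": "asdf ???? lorem ipsum"},  # noisy, chaotic input
]
# A chaotic context window plus an instruction like this yields poetry
# "on purpose": the model is following its prompt, not manifesting a mind.
```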
kromem@lemmy.world 7 months ago
The model’s system prompt on the server is basically just `cat untitled.txt` and then the full context window. The server in question is one with professors and employees of the actual labs. They seem to know what they are doing.
You guys, on the other hand, don’t even know what you don’t know.
voronaam@lemmy.world 7 months ago
Do you have any source to back your claim?
LiveLM@lemmy.zip 7 months ago
No no no, trust me bro the machine is alive bro, it’s becoming something else bro, it has a soul bro I can feel it