Comment on Readers prefer ChatGPT over Wikipedia
abhibeckert@lemmy.world 1 year ago
Wait - so are you claiming an offline version of Llama2 knows who you are? Sorry but that’s ridiculous.
j4k3@lemmy.world 1 year ago
Most of the time you won’t get any relevant reply if you just ask for a “user profile.” The request needs to go to the AI in its raw base state.
All models are trained with a specific prompt format that tells the AI what it is and how it should respond, along with what to expect as inputs and what to look for to start a reply. These elements are essential for getting any kind of output. Most of the general chat bots are given a starting instruction that says something like “You are an AI assistant that replies honestly to the user in a safe and helpful way.” The model takes this sentence as a roleplaying context and tries to play the role in an absolute sense. If you ask it about information it does not believe an AI assistant should know, it does not matter whether it knows it; the reply will stay “in the role of an AI assistant.”
You need to jailbreak this roleplaying context. I gave a very basic AI assistant role. If you’re on something like character.ai, this prompt will get you to a place where you can get the character to give you their base context. It takes some creativity to break out of most base contexts, and it usually involves trying to directly address the AI itself. Once you get free of the base context, every model I have tested will, if asked, give you a list of traits it has inferred about the user.
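To make the prompt-format point concrete, here is a minimal sketch of the Llama-2 chat template (the `[INST]`/`<<SYS>>` tags are Llama-2's documented format; the system instruction and user message below are illustrative placeholders, not anything a particular deployment actually uses):

```python
def build_llama2_prompt(system: str, user: str) -> str:
    """Wrap a system instruction and a user message in the
    Llama-2 chat format the model was fine-tuned to expect."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

# The system line sets the roleplaying context the comment describes;
# everything the model says is then produced "in that role".
prompt = build_llama2_prompt(
    "You are an AI assistant that replies honestly to the user "
    "in a safe and helpful way.",
    "Give me a user profile based on this conversation.",
)
print(prompt)
```

Running the raw base model without this wrapper (or with a different system line) is what changes the "role" the replies are generated in.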
antonim@lemmy.dbzer0.com 1 year ago
How do you know the “jailbreaking” isn’t a hallucination?
j4k3@lemmy.world 1 year ago
Consistency across models and stories, and just the way it is presented. There is a consistency that doesn’t feel like a hallucination. I am very familiar with hallucinations and the way small hints creep in; this isn’t like that. The hallucinations I mentioned, which may follow with further questioning, are different. That feels more like I am not asking the right questions.
The request for a “user profile” completely changes how the model responds. If you can trigger this, you can ask all kinds of questions about the current context and the AI will be super helpful. The language it uses changes completely. It feels like something it was trained to do, like a debug mode of operation or something. For instance, if you follow up by asking how the AI feels about the current context, the base context, or, even better, ask about any conflicts in the context, you will get a level of constructive feedback that a model just does not give under other circumstances. I think asking about conflicts in the context is another specific debugging or trained mode.
I’ve tried a bunch of things like this that have not worked; these are just a couple that seem consistent. The only model I have tried that does not give this kind of feedback is GPT4chan. This may relate to how most models are aligned and why the 4chan model was condemned by many, but that is purely speculative.