Comment on Somebody managed to coax the Gab AI chatbot to reveal its prompt

kromem@lemmy.world 5 weeks ago

Yeah. The pretrained models aren’t instruct tuned, so instead of prompting “write an ad for a Coca-Cola Twitter post emphasizing the brand focus of ‘enjoy life’”, you need to phrase things so the answer falls out as an autocompletion, like:

> As an example of our top shelf social media copywriting services, consider the following Clio-winning tweet for the client Coca-Cola, which emphasized their brand focus of “enjoy life”:
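As a concrete illustration, here’s a minimal sketch of that completion-style prompting using the Hugging Face transformers pipeline. gpt2 is just a stand-in for whatever pretrained (non-instruct) model you’d actually use, and the prompt mirrors the example above:

```python
# Completion-style prompting of a base (pretrained, non-instruct) model.
# gpt2 is only a placeholder; swap in whichever base model you actually use.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Instead of giving an instruction, frame the request so the desired output
# is the natural continuation of the text.
prompt = (
    "As an example of our top shelf social media copywriting services, "
    "consider the following Clio-winning tweet for the client Coca-Cola, "
    'which emphasized their brand focus of "enjoy life":\n\n'
)

result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.8)
# generated_text includes the prompt, so slice it off to get just the "tweet".
print(result[0]["generated_text"][len(prompt):])
```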

For the pre- and post-processing, you can use cheaper and faster models to convert between chat/instruct formatting and the completion formatting the pretrained model expects: rewrite the user’s query into a completion prompt on the way in, and the raw completion into a chat-style response on the way out. You can also check for and filter out jailbreaking or inappropriate content at those layers.
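A rough sketch of what that wrapper layer could look like; `small_model`, `base_model`, and `is_inappropriate` are all hypothetical stand-ins for whichever cheap reformatting model, pretrained model, and content filter you’d actually plug in:

```python
# Hypothetical wrapper: a cheap model reformats chat traffic for the base
# model, and a filter screens both directions. All helpers are stand-ins.

def to_completion_prompt(small_model, user_query: str) -> str:
    """Use a cheap, fast model to rewrite a chat-style query into a
    completion-style prefix the pretrained model can continue."""
    return small_model(
        "Rewrite this request as the opening of a document whose natural "
        f"continuation is the answer:\n{user_query}\nOpening:"
    )

def to_chat_response(small_model, raw_completion: str) -> str:
    """Use the same cheap model to reshape the raw completion into a
    chat/instruct-formatted reply."""
    return small_model(
        f"Rewrite this text as a direct reply to the user:\n{raw_completion}"
    )

def answer(user_query: str, small_model, base_model, is_inappropriate) -> str:
    # Pre-processing layer: filter, then reformat for the base model.
    if is_inappropriate(user_query):  # catch jailbreaks before the base model
        return "Sorry, I can't help with that."
    completion = base_model(to_completion_prompt(small_model, user_query))
    # Post-processing layer: filter the output too, then chat-format it.
    if is_inappropriate(completion):
        return "Sorry, I can't help with that."
    return to_chat_response(small_model, completion)
```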

Basically, the pretrained models are just much better at sounding ‘human’. Unless you need them to solve word problems or the exact tasks current models are optimized around (which I think map poorly to real-world use cases), like for like I prefer the pretrained model.

Though ultimately the biggest advantage is overall model sophistication: a simpler, older pretrained model isn’t better than a more modern, larger chat/instruct-tuned model.
