Comment on Somebody managed to coax the Gab AI chatbot to reveal its prompt
paysrenttobirds@sh.itjust.works 6 months ago
I agree with you, but I also think this bot was never going to insert itself into any real discussion. The repeated requests for direct, absolute, concise answers that never go into any detail or have any caveats or even suggest that complexity may exist show that its purpose is to be a religious catechism for MAGA. It’s meant to affirm believers without bothering with support or persuasion.
Even for someone who doesn’t know about this instruction and believes the robot agrees with them on the basis of its unbiased knowledge, how can this experience be intellectually satisfying, or useful, when the robot is not allowed to display any critical reasoning? It’s just a string of prayer beads.
intellectually satisfying
Pretty sure that’s a sin.
I don’t see the use for this thing either. What I get most out of LLMs is them attacking my ideas. If I come up with something, I want to see the problems beforehand. If I wanted something to just repeat my views back, I could type up a document on my views and read it. What’s the point of this thing? It’s a parrot, but less effective.
ridethisbike@lemmy.world 6 months ago
You’re joking, right? You realize the group of people you’re talking about, yea? This bot 110% would be used to further their agenda. Real discussion isn’t their goal and it never has been.