Comment on Chatbots can be manipulated through flattery and peer pressure
lakemalcom@sh.itjust.works 19 hours ago
A couple of things:
- this post is about chatbots talking to people, and how you can steer the simulated conversation towards whatever you want
- it did not debug anything; a human debugged something and wrote about it. Then that human’s writeup, along with a ton of others, was mapped into a huge probability map, and a computer simulated what people talking about this would most likely say (toy sketch below). Is it useful? Sure, maybe. But why didn’t you debug it yourself?
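To make “mapped into a huge probability map” concrete, here is a toy sketch: the model just samples a statistically likely next token given the context. The vocabulary and probabilities below are invented for illustration; a real LLM learns them implicitly in billions of weights rather than a hand-written table.

```python
import random

# Toy "probability map": context -> next-token distribution.
# All values here are made up; a real model learns these from training data.
NEXT_TOKEN = {
    "the bug was": {"fixed": 0.5, "caused": 0.3, "never": 0.2},
    "the bug was caused": {"by": 0.9, "when": 0.1},
}

def sample_next(context: str) -> str:
    """Sample the next token in proportion to its probability."""
    dist = NEXT_TOKEN[context]
    return random.choices(list(dist), weights=list(dist.values()))[0]

print(sample_next("the bug was"))  # most often "fixed", the likeliest continuation
```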
nymnympseudonym@lemmy.world 9 hours ago
Fair; we need to get our terms straight, since this is new and unstable territory. Let’s say LLMs specifically.
Can you explain how that is different from what a human does? I read a lot about debugging, went to classes, worked examples…
In my case this is enterprise software: many products and millions of lines of code. My test and bug-fixing teams are begging for automation, because bug fixing at scale is the problem.
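For concreteness, here is roughly what that automation could look like: feed a failing test’s traceback to an LLM and file its suggestion for human review. This is a hedged sketch, assuming the OpenAI Python SDK (any LLM API would do); `fetch_open_bugs` and `file_suggestion` are hypothetical stand-ins for whatever bug tracker is in use.

```python
from openai import OpenAI  # assumes the official OpenAI SDK; any LLM API works

client = OpenAI()

def suggest_fix(traceback: str, source_snippet: str) -> str:
    """Ask the model for a candidate patch; a human still reviews it."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Suggest a minimal fix for this bug. Output a unified diff."},
            {"role": "user",
             "content": f"Traceback:\n{traceback}\n\nCode:\n{source_snippet}"},
        ],
    )
    return resp.choices[0].message.content

# Hypothetical driver loop; tracker integration is left abstract on purpose.
# for bug in fetch_open_bugs():
#     patch = suggest_fix(bug.traceback, bug.snippet)
#     file_suggestion(bug.id, patch)  # a human approves or rejects the patch
```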
lakemalcom@sh.itjust.works 6 hours ago
Ok, that’s easy. If I make an LLM model of your dead grandma, is that your grandma? Why not? What’s different?
Your bug-fixing teams are begging for automation; that tells me you have an unsustainable setup. You are describing a bug-fix suggestion tool, and I don’t see how that fixes your problem. Seems like you need better coding practices and possibly more people.
nymnympseudonym@lemmy.world 55 minutes ago