My god, that’s a lot to process. A couple of things that stand out:
Comments proposing to use GitHub as the database backup. This is Keyword Architecture, and these people deserve everything they get.
The Replit model can also send out communications? It’s just a matter of time before some senior exec dies on the job but nobody notices because their personal LLM keeps emailing reports that nobody reads.
balder1991@lemmy.world 1 day ago
All I see is people chatting with an LLM as if it were a person. “How catastrophic from 0 to 100?” — you’re just typing to get some random answer based solely on whatever context is being fed in as input, and you probably don’t know the full extent of it.
Trying to make the LLM “see its mistakes” is a pointless exercise.
cyrano@lemmy.dbzer0.com 1 day ago
Yeah, the interactions are a pure waste of time, I agree. Making it write an apology letter? WTF! To me it looks like a fast-track way to learn environment segregation and secret segregation. Data is lost; learn from it. There are tools already in place, like git and alembic, for proper development.
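The environment segregation being suggested can be sketched in a few lines. This is a minimal illustration, not anything from Replit: the function name, env names, and URLs are all hypothetical, and the idea is just that tooling (or an agent) resolves its database from the environment and refuses to touch production without an explicit opt-in.

```python
import os

def database_url(env=None):
    """Resolve the database URL from the current environment.

    Hypothetical helper: 'APP_ENV' and 'ALLOW_PROD' are made-up names,
    and the URLs are placeholders. The point is that the default is dev,
    and prod requires a deliberate, explicit flag.
    """
    env = env or os.environ.get("APP_ENV", "dev")  # default to dev, never prod
    urls = {
        "dev": "sqlite:///dev.db",
        "staging": "postgresql://staging-host/app",
        "prod": "postgresql://prod-host/app",
    }
    if env == "prod" and os.environ.get("ALLOW_PROD") != "1":
        # An agent running with default settings can never reach prod.
        raise RuntimeError("refusing to use prod without explicit ALLOW_PROD=1")
    return urls[env]
```

With a layout like this, an LLM agent left to its own devices only ever sees the dev database, and schema changes flow through versioned migrations (alembic) tracked in git rather than ad-hoc edits against live data.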
UntitledQuitting@reddthat.com 1 day ago
the apology letter(s) is what made me think this was satire. using shame to punish “him” like a child is an interesting troubleshooting method.
the lying robot hasn’t heel-turned, any truth you’ve gleaned has been accidental.
cyrano@lemmy.dbzer0.com 1 day ago
It doesn’t look like satire, unfortunately.
andallthat@lemmy.world 1 day ago
I wonder if it can be used legally against the company behind the model, though. I doubt it’s possible, but a “your own model says it effed up my data” admission could give some beef to a complaint. Or at least to a request for a refund on the fees.
6nk06@sh.itjust.works 22 hours ago
How bad is this on a scale of sad emoji to eggplant emoji?
Children are replacing us, it’s terrifying.