The model we have at work tries to work around this by including some checks. I assume the checks get farmed out to specialised models that receive the output of the first stage as input.
Maybe it catches some errors? It's better than pretend reasoning, but it's very verbose, so the tasks I've experimented with - which should be simple and quick - end up taking more time than they should.
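To be clear, I'm guessing at the architecture. Here's a minimal sketch of what I imagine the pipeline looks like; everything in it (the `Model` type, `run_with_checks`, the PASS/FAIL convention) is my assumption, not how the actual product works:

```python
from dataclasses import dataclass
from typing import Callable

# A "model" here is just a function from prompt to text; in practice
# each would be an API call to a separately hosted model.
Model = Callable[[str], str]

@dataclass
class CheckResult:
    passed: bool
    notes: str

def run_with_checks(task: str, generator: Model,
                    checkers: list[Model]) -> tuple[str, list[CheckResult]]:
    """Stage 1 drafts an answer; each checker model then receives the
    original task plus the draft and returns a verdict."""
    draft = generator(task)
    results = []
    for check in checkers:
        verdict = check(
            f"Task: {task}\nDraft answer: {draft}\n"
            "Does the draft satisfy the task? Reply PASS or FAIL with a reason."
        )
        results.append(CheckResult(
            passed=verdict.strip().upper().startswith("PASS"),
            notes=verdict,
        ))
    return draft, results

# Toy stand-ins so the sketch runs without any external service.
if __name__ == "__main__":
    generator = lambda prompt: "42"
    arithmetic_checker = lambda prompt: "PASS: draft is consistent with the task."
    draft, results = run_with_checks("What is 6 * 7?", generator,
                                     [arithmetic_checker])
    print(draft, [(r.passed, r.notes) for r in results])
```

If it works anything like that, the verbosity makes sense: every check is another full round trip through another model.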
MelodiousFunk@slrpnk.net 1 day ago
It’s uncanny how it keeps becoming more human-like.
MotoAsh@piefed.social 1 day ago
No. No it doesn't. ALL human-like behavior stems from its training data … which comes from humans.