Comment on Hardening Firefox with Anthropic’s Red Team
lIlIlIlIlIlIl@lemmy.world 1 week ago
Hallucinated? From researched and documented code spelunking?
PabloSexcrowbar@piefed.social 1 week ago
That’s…exactly my point though…
lIlIlIlIlIlIl@lemmy.world 1 week ago
What is?
PabloSexcrowbar@piefed.social 1 week ago
That even though the team is using AI to check for vulnerabilities, they’re trained to recognize when their AI is hallucinating and when it isn’t.
lIlIlIlIlIlIl@lemmy.world 1 week ago
I guess I’m not sure how hallucinating and reading from source code overlap. Do you think these models are just barfing back garbage nonsense?