Large companies probably do that anyway.
Take Blizzard, for example. They just released a new patch where the class campaign quests for 8 of 12 classes don't work. Sure, it's a remixed version of an older expansion, and with all the phasing stuff I can kind of imagine some of the phasing issues being caused by, I don't know, the player having a weird combination of completed content that's hard to properly catch in testing, since there are quite a lot of variables.
But there's just no excuse for the fact that one of the class quests requires crafted items to complete, while crafting isn't available in the Remix by design. Either they just don't give a fuck about an issue that's literally a progression blocker with a 100% repro rate, or no one ever tested it even once.
As someone who has worked in QA and gamedev, I can't imagine how something as obvious as this could ever get approved for release. That's something you catch immediately. Hell, you don't even have to play through it to realize it might be a problem.
Mikina@programming.dev 5 months ago
Square Enix actually has a pretty sick automated QA already. There’s a cool talk about how they did that for FFVII remake in GDC vault, and I highly recommend watching it, if you’re at all interested in QA.
It has nothing to do with AI; it's just plain old automation, but they solve most of the problems you run into when building automated tests for a non-discrete 3D playspace, and they do it in a pretty solid way. It's definitely something I'd love to have implemented in the games I'm working on, as someone who worked in QA and now works in development. Having a mostly reliable way to smoke-test levels for basic gameplay, without having to torture QA into running the same test case yet again, is great and frees QA up to focus on something else - but the tools still need oversight, so it's not really a job lost. In summary: the talk is cool tech and worth the watch.
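To make the contrast with AI-driven testing concrete, here's a minimal sketch of what I mean by a deterministic level smoke test - replay a fixed route and assert the agent reaches every waypoint. The level, waypoints, step size and tolerance are all made up for illustration; a real engine would drive an actual agent through the build instead of this toy movement sim:

```python
# Minimal sketch of a deterministic level smoke test: replay a fixed
# waypoint route and assert the simulated agent reaches each point.
# All values below are invented for illustration.

WAYPOINTS = [(0.0, 0.0), (5.0, 0.0), (5.0, 3.0)]  # fixed route through the level
STEP = 0.5          # deterministic movement per tick
TOLERANCE = 0.25    # how close counts as "reached"
MAX_TICKS = 1000    # fail the test instead of hanging forever

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def step_towards(pos, target, step):
    """Move pos a fixed step towards target (stand-in for the game sim)."""
    d = distance(pos, target)
    if d <= step:
        return target
    return (pos[0] + (target[0] - pos[0]) / d * step,
            pos[1] + (target[1] - pos[1]) / d * step)

def smoke_test_route(waypoints):
    """Return True if every waypoint is reached within MAX_TICKS ticks."""
    pos = waypoints[0]
    for target in waypoints[1:]:
        for _ in range(MAX_TICKS):
            pos = step_towards(pos, target, STEP)
            if distance(pos, target) <= TOLERANCE:
                break
        else:
            return False  # blocked: same result on every run, easy to triage
    return True

print(smoke_test_route(WAYPOINTS))
```

The point is that the same input produces the same outcome on every run, so when the test fails you know the level actually changed - exactly the property a nondeterministic AI agent can't give you.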
However, I don't think AI will help in this regard; something as unreliable and random as current AI models is not a good fit for the job. You want deterministic test cases that you can quantify, and when something doesn't match, you want an actual human to look at why. AI also probably won't find the clever corner cases and bugs that take human ingenuity.
Fuck AI. I kind of hope this is just marketing talk and they're actually just improving the (deterministic) tools they already have, calling it "AI" to satisfy investors/management without actually slapping a glorified chatbot into the tech for no reason.