postscarce
@postscarce@lemmy.dbzer0.com
- Comment on AI 2027 1 week ago:
What’s interesting, to me, is that’s exactly how people hedge in the fringe UFO community too.
Ha! True. Very true. I find this scenario compelling, but it's based on a series of assumptions that individually seem plausible, though I have no way to evaluate them all together. It's like the Drake Equation: because the probabilities are multiplicative, even tiny adjustments to a few of them make a huge difference to the final answer.
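To make the multiplicativity point concrete, here's a tiny sketch (my own toy numbers, not anything from the scenario): six independent 50% assumptions, then the same chain with just two of them nudged down to 40%.

```python
import math

def chained_probability(factors):
    # Probability that every step in a chain of independent
    # assumptions comes true: just the product of the factors.
    return math.prod(factors)

baseline = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]  # six 50% assumptions
tweaked  = [0.4, 0.4, 0.5, 0.5, 0.5, 0.5]  # nudge only two of them

print(round(chained_probability(baseline), 6))  # 0.015625
print(round(chained_probability(tweaked), 6))   # 0.01
```

Two modest tweaks shave more than a third off the final probability, which is why small disagreements about individual assumptions swamp the bottom line.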
The thing is, though, if there really is even a tiny chance of the ultimate outcome of this thought experiment coming true (i.e. the end of humanity), then we should probably address it. And what that would look like is stopping the AI companies from doing any more research until they can prove their models are safe, which should also make people who are more concerned about AI slop happy. Everybody wins by hitting the brakes.
- Comment on AI 2027 1 week ago:
It’s not meant to be a specific prediction; it’s just a plausible (for when it was written) scenario. Don’t worry about the actual years, which could be off by an order of magnitude; just decide for yourself whether any of the assumptions are completely wrong.
- Comment on Microsoft’s $440 billion wipeout, and investors angry about OpenAI’s debt, explained 1 month ago:
LLMs could theoretically give a game a lot more flexibility by responding dynamically to player actions, generating custom dialogue, etc., but, as you say, it would work best as a module within an existing framework.
I bet some of the big game dev companies are already experimenting with this, and in a few years (maybe a decade, considering how long it takes to develop a AAA title these days) we will see RPGs with NPCs you can actually chat with, which remain in-character and respond to what you do. Of course, that would probably mean API calls to the publisher’s server where the custom models are run, with all of the downsides that entails.
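The "module within an existing framework" idea above can be sketched in a few lines. Everything here is hypothetical: `call_llm` stands in for whatever API the publisher's server would actually expose, and the names are made up. The point is just that staying in-character and reacting to player actions both reduce to managing the message history around the model call.

```python
def call_llm(messages):
    # Hypothetical stub: a real implementation would send `messages`
    # to the publisher's hosted model and return its text reply.
    return "(model reply would go here)"

class NPC:
    def __init__(self, name, persona):
        # The persona/system message is what keeps the character consistent
        # across the whole conversation.
        self.messages = [{
            "role": "system",
            "content": f"You are {name}. {persona} Never break character.",
        }]

    def observe(self, event):
        # The game engine feeds in events so dialogue can react
        # to what the player actually did.
        self.messages.append({"role": "system", "content": f"[event] {event}"})

    def say(self, player_line):
        # One turn of conversation: append the player's line,
        # get a reply, and keep it in the history.
        self.messages.append({"role": "user", "content": player_line})
        reply = call_llm(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

guard = NPC("Toren", "A weary city guard who distrusts adventurers.")
guard.observe("The player just picked the lock on the north gate.")
print(guard.say("Evening, officer. Lovely night, isn't it?"))
```

The engine stays in charge of game state; the LLM only fills in dialogue, which is also where the downside shows up: every `call_llm` in this sketch would be a round-trip to the publisher's server.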