Unfortunately, an LLM is wrong somewhere around 1 in 5 to 1 in 10 times: roughly 80% to 90% accuracy on factual questions, and research from OpenAI and DeepMind argues that some rate of hallucination is baked into how these models work, so no amount of compute or training data will push it to zero. Add on top of that the fact that the model is trained on human-written text, which is itself flawed, so the model's own error rate compounds with the average person's rate of being wrong.
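To put toy numbers on that compounding point (the 80% matches the low end of the figure above; the 90% for human sources is an invented assumption, not a measurement):

```python
# Back-of-the-envelope sketch of how two error sources compound.
# Both rates are illustrative assumptions, not benchmark results.
model_accuracy = 0.80   # assume the model generates a faithful answer 80% of the time
source_accuracy = 0.90  # assume the human text it learned from is right 90% of the time

# Treating the two failure modes as independent, the chance an answer is
# both faithfully generated AND grounded in correct information:
combined = model_accuracy * source_accuracy
print(f"Combined accuracy: {combined:.0%}")  # -> 72%, worse than either rate alone
```

It's a crude model (the two failure modes aren't really independent), but it shows why stacking a fallible generator on top of fallible training data drags reliability below either rate on its own.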
In other words, you're better off browsing forums and asking people, or finding books on the subject, because the AI is full of shit, and if you rely on it you'll end up as one of those idiot sloppers everybody makes fun of: you won't know jack shit and you'll be confidently incorrect.
Icytrees@sh.itjust.works 9 hours ago
I’m seconding this and adding to it. AI is terrible for factual information but great at relative knowledge and reframing.
I use it as a starting point in writing research when I can't get relevant search results. Most recently, I asked it about urban legends in modern-day Louisiana and got a list to guide more in-depth searching; most of the entries turned out to be accurate.
It’s also good at mocking up accents and patterns of speech for a given location and time period.