It depends on what purpose that paperwork is intended for.
If the regulatory paperwork it’s managing is designed to influence behaviour, having an LLM do the work may well make it less effective in that regard.
Learning and understanding is hard work. An LLM can’t do that for you.
Sure, it can summarise instructions and show you what’s most pertinent in a given instance, but is that the same as someone who knows what to do because they’ve been wading around in the logs and regs for the last decade?
It seems like, whether you’re using an LLM to write a business report, a legal submission, or a SOP for running a nuclear reactor, it can be a great tool, but it requires high-level knowledge on the part of the user to review the output.
As always, there’s a risk that a user just won’t identify a problem in the information produced.
I don’t think this means LLMs should not be used in high-risk roles; it just demonstrates the importance of robust policies surrounding their use.
cyrano@lemmy.dbzer0.com 5 days ago
I agree with you, but you can see the slippery slope of the LLM returning incorrect/hallucinated data, the same way it’s happening in the public space. It can seem trivial for documentation, until you realise that documentation could be critical for some processes.
hansolo@lemm.ee 5 days ago
If you’ve never used a custom LLM or a wrapper around regular ol’ ChatGPT: a lot of what it could hallucinate gets stripped out, and the entire corpus of data it’s trained on is your data. Even then, the risk is pretty low here. Do you honestly think that a human has never made an error on paperwork?
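Roughly, a wrapper like that just pulls passages out of your own documents, stuffs them into the prompt, and tells the model to answer only from those. A rough sketch of the idea (the keyword retrieval is deliberately naive, and `call_llm` is a made-up placeholder for whatever chat client you actually use, not any specific product):

```python
# Sketch of a "grounded" wrapper: retrieve passages from your own docs,
# then constrain the model to answer only from them.

def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Naive keyword overlap scoring; real setups use embeddings instead."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Instruct the model to answer ONLY from the supplied passages."""
    context = "\n\n".join(passages)
    return ("Answer using only the context below. "
            "If the answer is not in the context, say you don't know.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

def call_llm(prompt: str) -> str:
    # Placeholder: swap in whichever chat-completion API you use.
    raise NotImplementedError

if __name__ == "__main__":
    docs = ["SOP 12: coolant checks happen every 4 hours.",
            "SOP 7: incident reports go to the shift supervisor first."]
    question = "How often are coolant checks required?"
    print(build_prompt(question, retrieve(question, docs)))
```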
cyrano@lemmy.dbzer0.com 5 days ago
I do, and even contained ones return hallucinations or incorrect data. So it depends on the application you use it for. For a quick summary / data search, why not? But for some operational process, that might be problematic.