You’re absolutely right, and I’m aware that the traceability was already a given. The collected data was also already there and has been for years.
The AI in this case mainly added a “human language” interface and some automation. The multi-agent approach was just a relatively simple way to integrate multiple systems. You could have done the same before, either with more manual steps, or fully automated but then more rigid and less flexible. With this system you can, for example, ask follow-up questions.
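To make the “glue” part a bit more concrete, the pattern is roughly this. A very stripped-down Python sketch, not the actual demo: all system names, part ids and functions are made up, and the LLM/function-calling part is stubbed out with a keyword check.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """Keeps the chat history so follow-up questions have context."""
    messages: list = field(default_factory=list)

def query_traceability(part_id: str) -> dict:
    # Placeholder for the existing traceability system's API (hypothetical).
    return {"part_id": part_id, "last_station": "A3", "timestamp": "2024-05-02T08:13"}

def query_quality_db(part_id: str) -> dict:
    # Placeholder for the quality/measurement data that was already collected.
    return {"part_id": part_id, "defects": 0, "torque_ok": True}

TOOLS = {"traceability": query_traceability, "quality": query_quality_db}

def answer(question: str, convo: Conversation) -> str:
    convo.messages.append({"role": "user", "content": question})
    # In the real setup the LLM agents pick the tool and its arguments
    # (function calling); here the routing is stubbed with a keyword check.
    tool = "quality" if "defect" in question.lower() else "traceability"
    data = TOOLS[tool](part_id="4711")  # hypothetical part id
    reply = f"(the LLM would phrase this in plain language) {tool}: {data}"
    convo.messages.append({"role": "assistant", "content": reply})
    return reply

convo = Conversation()
print(answer("Where was part 4711 last processed?", convo))
print(answer("Any defects on it?", convo))  # follow-up reuses the shared history
```

The existing systems keep their own APIs; the LLM only decides which one to call and turns the result into plain language.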
IMALlama@lemmy.world 1 day ago
Ah, I see. It’s very true that a lot of plants have… older software setups that likely require a bit more of a human touch than should be necessary. I don’t work in a plant, but that’s basically been my career arc - “the poor humans have to hop between how many disconnected systems to accomplish what now? Let’s write some better software to address that.”
Using AI as a replacement for human glue seems reasonable if you have decent data to traverse. The “data” at my employer is often bespoke to each system, which means a lot of gray matter goes into mapping names and attributes across systems. Our IT org is working on rolling out Glean, but so far it’s basically a better internal search rather than something that offers real insights.
affenlehrer@feddit.org 1 day ago
Quite interesting. I’ve worked as a software engineer in different areas, but not in data science. The demo I described was part of a bigger effort to bring visibility to AI-related use cases. This one was a relatively good fit in my opinion, as long as the data is mostly handled outside of the LLMs and they just “translate” to human language and glue things together. Otherwise I’d fear hallucinations and data not fitting into the context window.
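To illustrate what I mean by keeping the data outside the LLMs: the aggregation runs in the data layer, and only a compact summary ever goes into the prompt. Very rough Python sketch, table and column names are invented:

```python
import sqlite3

# Toy stand-in for a quality database that already exists (names made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inspections (line TEXT, defect INTEGER)")
conn.executemany(
    "INSERT INTO inspections VALUES (?, ?)",
    [("L1", 0), ("L1", 1), ("L1", 0), ("L2", 0)],
)

def defect_summary(line: str) -> str:
    """Aggregate in SQL so raw rows never enter the LLM's context window."""
    total, defects = conn.execute(
        "SELECT COUNT(*), COALESCE(SUM(defect), 0) FROM inspections WHERE line = ?",
        (line,),
    ).fetchone()
    return f"Line {line}: {defects} defective out of {total} inspected parts."

# Only this one-line summary would go into the prompt, not the raw rows.
print(defect_summary("L1"))  # -> "Line L1: 1 defective out of 3 inspected parts."
```

That way the context stays small and the model only rephrases numbers it was actually given, instead of “remembering” them.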