Your comment is a good reason why these tools have no place in the courtroom: the things you describe are imagination.
They’re image generation tools that generate a new, unrelated image that happens to look similar to the source image. They don’t reconstruct anything, and they have no understanding of what the image contains. All they know is what color the pixels in the output probably have, given the pixels in the input (see the toy sketch below).
It’s no different from giving a description of a scene to an author, asking them to come up with any event that might have happened in such a location, and then trying to use the resulting short story to convict someone.
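To make the “probable pixels” point concrete, here’s a deliberately simplified toy sketch (plain numpy, nothing like a real upscaler’s architecture): the “detail” in the output is sampled from a probability distribution, so the same input can yield different “enhanced” images, and none of them is a recovery of what was actually there.

```python
# Toy sketch only: not how any real AI upscaler works internally, but it shares
# the property that matters here -- added "detail" is sampled, not measured.
import numpy as np

def fake_enhance(low_res: np.ndarray, seed: int) -> np.ndarray:
    """Upscale 2x by repeating pixels, then add sampled 'detail'."""
    rng = np.random.default_rng(seed)
    high_res = low_res.repeat(2, axis=0).repeat(2, axis=1).astype(float)
    # The added detail is drawn from a probability distribution; it was
    # never present in the input data.
    detail = rng.normal(loc=0.0, scale=10.0, size=high_res.shape)
    return np.clip(high_res + detail, 0, 255).astype(np.uint8)

low_res = np.array([[100, 150], [200, 50]], dtype=np.uint8)  # a 2x2 "photo"
a = fake_enhance(low_res, seed=1)
b = fake_enhance(low_res, seed=2)
print(np.array_equal(a, b))  # almost certainly False: same input, two different "reconstructions"
```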
Natanael@slrpnk.net 7 months ago
There are a lot of other layers in brains that are missing in machine learning. These models don’t form world models, don’t have an understanding of facts, and have no means of ensuring consistency, to start with.
rdri@lemmy.world 7 months ago
I mean, if we consider just the reconstruction process used in digital photos, it feels like current AI models are already very accurate and wouldn’t be improved by much even if we made them closer to real “intelligence”.
The point is that reconstruction itself can’t produce missing details, not that a “properly intelligent” mind would be any better at it than current AI.
lightstream@lemmy.ml 7 months ago
They absolutely do contain a model of the universe which their answers must conform to. When an LLM hallucinates, it is creating a new answer which fits its internal model.
Natanael@slrpnk.net 7 months ago
Statistical associations are not equivalent to a world model, especially because they’re neither deterministic nor do they even try to prevent giving conflicting answers. They model only the use of language.
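As a toy illustration (made-up probabilities, not any real model’s numbers): when an answer is sampled from word statistics, nothing in the sampling step checks it against a model of the world, so conflicting answers to the same question are exactly what you should expect.

```python
# Toy sketch with invented numbers: sampling an answer from learned word
# statistics. Nothing here consults facts or checks consistency.
import random
from collections import Counter

# Hypothetical next-word distribution for "The capital of France is ..."
next_word_probs = {"Paris": 0.90, "Lyon": 0.06, "Nice": 0.04}

def sample_answer(rng: random.Random) -> str:
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
counts = Counter(sample_answer(rng) for _ in range(1000))
print(counts)
# Mostly "Paris", but also some "Lyon" and "Nice": the statistics prefer the
# right answer, yet nothing prevents the wrong ones from being produced.
```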