Isn’t this still subject to the same problem, where a system can lie about its inference chain by returning a plausible chain which wasn’t the actual chain used for the conclusion? (I’m thinking from the perspective of a consumer sending an API request, not the service provider directly accessing the model.)
Also:
Any time I see a highly technical post talking about AI and/or crypto, I imagine a skilled accountant living in the middle of mob territory. They may not be directly involved in any scams themselves, but they gotta know that their neighbors are crooked and a lot of their customers are gonna use their services in nefarious ways.
singletona@lemmy.world 1 week ago
Hey, can someone dumb down the dumbed-down explanation for me please?
fartsparkles@lemmy.world 1 week ago
AI is a magical black box that performs a bunch of actions to produce an output. We can’t trust a developer’s claims about what the black box does inside unless it’s completely open source (including the weights).
This is a concept for a system where the actions performed can be proven to people who have no visibility inside the box, so they can trust the box is doing what it says it’s doing.
For example, an AI enemy in a game that could prove it isn’t cheating by providing proof of the actions it took. In theory.
Zero-knowledge proofs make a lot of sense in cryptography, but in a more abstract application like this, the scheme still relies on a lot of trust that the implementation actually generates proofs for all of its actions.
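To make that trust gap concrete, here’s a minimal commit-reveal sketch in Python. It is only a stand-in: the hash “commitment” is not zero-knowledge (a real system would use a zk-SNARK/zk-STARK circuit), and all the names and action strings are illustrative.

```python
# Toy commit-reveal sketch, NOT a zero-knowledge proof. A real zk
# system would prove the computation inside a circuit; this just
# shows where the trust gap lives.
import hashlib
import json

def commit(actions: list[str], nonce: str) -> str:
    """Prover publishes a hash commitment to its action log up front."""
    payload = json.dumps({"actions": actions, "nonce": nonce})
    return hashlib.sha256(payload.encode()).hexdigest()

def verify(actions: list[str], nonce: str, commitment: str) -> bool:
    """Verifier checks the revealed log matches the earlier commitment."""
    return commit(actions, nonce) == commitment

# The catch: the proof only covers the actions the implementation
# chose to commit to. If the prover logs ["move", "shoot"] but also
# secretly reads hidden state, the commitment still verifies.
c = commit(["move", "shoot"], nonce="r4nd0m")
assert verify(["move", "shoot"], "r4nd0m", c)
```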
Whenever I see Web3, I personally lose any faith in whatever is being presented or proposed. To me, blockchain is an impressive solution to no real problem (except perhaps border control / customs).
AtHeartEngineer@lemmy.world 1 week ago
ZK in this context allows someone to thoroughly test a model and publish the results along with proof that the same model was used.
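A minimal sketch of that binding in Python, assuming a plain hash as the model fingerprint; a real zk-ML setup would go further and also prove the test run itself was executed with those exact weights. The weights, benchmark name, and score below are all hypothetical.

```python
# Illustrative only: a hash fingerprint binds published results to
# one exact set of weights, but does not by itself prove the score
# was produced by those weights (that's what the ZK proof adds).
import hashlib

def model_fingerprint(weights: bytes) -> str:
    """Hash the weights so published results name one exact model."""
    return hashlib.sha256(weights).hexdigest()

weights = b"\x00" * 1024  # stand-in for a real weight file

report = {
    "model": model_fingerprint(weights),
    "benchmark": "held-out accuracy",  # hypothetical benchmark
    "score": 0.87,                     # hypothetical score
}
# Anyone holding the same weights can recompute the fingerprint and
# confirm the report refers to that exact model.
```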
Blockchain is actually a great fit for zk-ML, for two reasons:
kernelle@0d.gs 1 week ago
The way AI is trained today creates a black-box solution; the author says only the developers of the model know what goes on inside the black box.
This is a major pain point in AI: we are trying to understand these models so we can make them better and more reliable. The author mentions that unless AI companies open-source their work, it’s impossible for everyone else to ‘debug’ the circuit.
Zero-knowledge proofs are how they are trying to combat this: using cryptographic algorithms, they aim to verify the output of an AI model in real time without having to reveal the underlying intellectual property.
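Here’s a sketch of the protocol shape only, in Python. The hash “proof” is a placeholder a dishonest prover could fake, so this is not zero-knowledge; real systems compile the model into a proving circuit. All names and values are illustrative.

```python
# Protocol shape only: who holds what. The verifier never touches
# the private weights; in a real system, placeholder_check would be
# replaced by a succinct ZK verification algorithm.
import hashlib
import json
from dataclasses import dataclass

@dataclass
class InferenceClaim:
    model_commitment: str  # public fingerprint of the private weights
    x: float               # public input
    y: float               # public output
    proof: str             # should attest that y == model(x)

def placeholder_check(claim: InferenceClaim) -> bool:
    """Stand-in verification: recompute the hash the prover published.
    Note this does NOT prove the model actually computed y, which is
    exactly what the real ZK proof would add."""
    digest = hashlib.sha256(
        json.dumps([claim.model_commitment, claim.x, claim.y]).encode()
    ).hexdigest()
    return digest == claim.proof

# Hypothetical claim: commitment "abc123", input 2.0, output 4.0.
proof = hashlib.sha256(json.dumps(["abc123", 2.0, 4.0]).encode()).hexdigest()
assert placeholder_check(InferenceClaim("abc123", 2.0, 4.0, proof))
```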
This could be used to train AI further and drastically increase its reliability, so it could be trusted with more important decisions and adhere much more closely to the strategies for which it is deployed.
singletona@lemmy.world 1 week ago
Thanks for the ‘for dummies’ explanation.