Comment on How Googlers cracked an SF rival's tech model with a single word | A research team from the tech giant got ChatGPT to spit out its private training data

<- View Parent
luthis@lemmy.nz ⁨11⁩ ⁨months⁩ ago

If it uses a pruned model, it would be difficult to give anything better than a percentage based on size and neurons pruned.

If I’m right in my semi-educated guess below, then technically all the training data is recallable to some degree, but it’s also practically luck-based without having an almost infinite data set of how neuron weightings are increased/decreased based on input.

source
Sort:hotnewtop