Comment on When A.I.’s Output Is a Threat to A.I. Itself | As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.

BetaDoggo_@lemmy.world 2 months ago

How many times is this same article going to be written? Model collapse from synthetic data is not a concern at any scale when human data is in the mix. We already have entire families of models trained mostly on synthetic data, e.g. Phi-3: huggingface.co/docs/transformers/main/…/phi3. When training exclusively on unassisted model outputs, error does accumulate with each generation, but that doesn't happen in any realistic scenario (toy sketch below).
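
For anyone who wants to see the difference, here's a minimal toy sketch (my own illustration, not the article's setup or how Phi-3 was trained): each "model" is just a Gaussian fitted to its training data, and the sample sizes and fractions are made-up numbers. Retraining purely on the previous generation's samples collapses the distribution, while keeping even 10% human data in the mix holds it stable.

```python
import numpy as np

rng = np.random.default_rng(0)
HUMAN = rng.normal(0.0, 1.0, size=10_000)   # stand-in for human-written data
N = 50  # tiny per-generation training set, so sampling error is visible

def final_std(generations: int, human_frac: float) -> float:
    """Std of the training set after repeated retrain-and-sample rounds."""
    data = rng.choice(HUMAN, size=N)
    for _ in range(generations):
        mu, sigma = data.mean(), data.std()               # "train" the model
        n_h = int(human_frac * N)                         # human share kept in
        synthetic = rng.normal(mu, sigma, size=N - n_h)   # model's own outputs
        data = np.concatenate([rng.choice(HUMAN, size=n_h), synthetic])
    return data.std()

print("100% synthetic:", final_std(500, human_frac=0.0))  # drifts toward 0
print(" 10% human mix:", final_std(500, human_frac=0.1))  # stays near 1.0
```

The pure-synthetic run is a negative-drift random walk in log-variance, so it shrinks generation over generation; the mixed run settles near a fixed point (roughly 0.1·1 + 0.9·σ² = σ², i.e. σ² ≈ 1), which is the whole point about human data in the mix.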
