Comment on Gmail users warned to opt out of new feature - what we know
NuXCOM_90Percent@lemmy.zip 1 week ago
Understand that basically ANYTHING that “uses AI” is using you for training data.
At its simplest, it is old-fashioned A/B testing, where you are used as part of a reinforcement/labeling pipeline. Sometimes it gets considerably more bullshit, as your very queries, and whatever led you to make them, are used to “give you a better experience” and so forth.
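To make that concrete, here is a minimal sketch (all names hypothetical, not any vendor’s actual pipeline) of how an A/B split plus a logged outcome quietly turns ordinary use into labeled training data:

```python
# Hypothetical sketch: an A/B split plus a logged outcome becomes a labeled example.

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into variant A or B."""
    return "A" if hash(user_id) % 2 == 0 else "B"

def log_outcome(user_id: str, query: str, suggestion: str, accepted: bool) -> dict:
    """The interaction itself becomes a labeled training record."""
    return {
        "variant": assign_variant(user_id),
        "input": query,
        "model_output": suggestion,
        "label": 1 if accepted else 0,  # implicit "good"/"bad" label, no survey needed
    }

# The user never clicked "label this data", but a record exists anyway.
example = log_outcome("user-42", "write a reply to my boss", "Sounds great, see you then!", accepted=True)
print(example)
```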
And if you read any of the EULAs (for the stuff that google opted users into…) you’ll see verbiage along those lines.
Of course, the reality is that google is going to train off our data regardless. But that is why it is a good idea to decouple your life from google as much as possible. It takes a long-ass time, but… no better time than today.
FaceDeer@fedia.io 1 week ago
No, that's not necessarily the case. A lot of people don't understand how AI training and AI inference work; they are two completely separate processes. Doing one does not entail doing the other. In fact, a lot of research is going into making it possible to do both at once, because that would be really handy, but it can't really be done that way yet.
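As a rough illustration of that separation (a toy PyTorch sketch, not Google's code), inference just runs the frozen model on your input, while training is its own deliberate loop that needs labels, gradients, and an optimizer:

```python
import torch
import torch.nn as nn

# Toy model purely for illustration.
model = nn.Linear(4, 2)

# Inference: run the frozen model on data. No gradients, no weight updates;
# nothing about the input is "learned".
with torch.no_grad():
    prediction = model(torch.randn(1, 4))

# Training: a separate code path that requires labels and explicitly updates weights.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss = nn.functional.cross_entropy(model(torch.randn(8, 4)), torch.randint(0, 2, (8,)))
loss.backward()
optimizer.step()  # only here do the weights actually change
```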
Go ahead and read the EULAs; they will have separate sections specifically about the use of data for training. Data privacy is regulated by a lot of laws, even in the United States, and corporate users are extremely picky about that sort of thing.
If the checkbox you're checking in the settings doesn't explicitly say "this is to give permission to use your data for training", then it probably isn't doing that. There might be a separate one somewhere, or it might just be a blanket thing covered in the EULA, but "tricking" the user like that wouldn't make any sense. It doesn't save them any legal hassle to do it that way.
NuXCOM_90Percent@lemmy.zip 1 week ago
Yes, they are. Not sure why you are bringing that up.
Because a huge part of making a good model is providing good data. That is, generally speaking, done by labeling things ahead of time. Back in the day it was paying people to take an Amazon survey where they said “hot dog or not hot dog”. These days… it is “anti-bot” technology that gets that for free.
But it is ALSO just simple metrics like “Did the user use what we suggested?”. Instead of saying “not hot dog”, it is “good reply” or “no reply” or “still read email” or “ignored email” and so forth.
That is literally just a feedback loop and is core to pretty much any “agentic” network/graph.
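A hedged sketch of that feedback loop (all names and event types hypothetical): ordinary user events get mapped onto reward signals, which is exactly the shape of data a reinforcement-style fine-tuning pass would consume later.

```python
# Hypothetical mapping from user events to reward signals.
EVENT_REWARD = {
    "used_suggested_reply": 1.0,
    "edited_then_sent": 0.5,
    "ignored_suggestion": 0.0,
    "deleted_email_unread": -0.5,
}

def to_training_record(prompt: str, suggestion: str, event: str) -> dict:
    """Turn one user interaction into a (prompt, response, reward) triple."""
    return {
        "prompt": prompt,
        "response": suggestion,
        "reward": EVENT_REWARD.get(event, 0.0),
    }

# No one clicked "train on my data"; the signal is simply whether the feature got used.
record = to_training_record("summarize this thread", "Here's a summary...", "ignored_suggestion")
print(record)
```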
There also tend to be laws about opting in, and yet forced EULA agreements happen anyway. It is almost like the megacorps have figured out that they can just do whatever and MAYBE pay a fine after they have already made so much more money.
FaceDeer@fedia.io 1 week ago
I am bringing it up because the setting Google is presenting only describes using AI on your data, not training AI on your data.