FaceDeer
@FaceDeer@fedia.io
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
- Comment on same shit every day, on god 11 hours ago:
Just pipe the electroplasma directly into the workstations. Sure, sometimes this results in dangerous overloads during adverse conditions, but that's what the Cordry rocks are for.
- Comment on Microsoft AI CEO Puzzled by People Being "Unimpressed" by AI 4 days ago:
- Comment on Google Revisits JPEG XL in Chromium After Earlier Removal 5 days ago:
It works because the .png and .jpg extensions are associated on your system with programs that, by coincidence, are also able to handle webp images and that check the binary content of the file to figure out what format they are when they're handling them.
If there's a program associated with .png on a system that doesn't know how to handle webp, or that trusts the file extension when deciding how to decode the contents of the file, it will fail on these renamed files. This isn't a reliable way to "fix" these sorts of things.
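The content-sniffing described above can be sketched in a few lines. This is a minimal illustration (real image libraries recognize many more formats): it reads the first bytes of the file and matches them against the published magic numbers for PNG, JPEG, and WebP, ignoring the extension entirely.

```python
import os
import tempfile

def sniff_image_format(path):
    """Identify an image format by its magic bytes, not its extension."""
    with open(path, "rb") as f:
        header = f.read(12)
    if header.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    if header.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    # WebP is a RIFF container: "RIFF" <4-byte size> "WEBP"
    if header[:4] == b"RIFF" and header[8:12] == b"WEBP":
        return "webp"
    return "unknown"

# Demo: a WebP file renamed to .png is still detected as webp.
fd, demo = tempfile.mkstemp(suffix=".png")
with os.fdopen(fd, "wb") as f:
    f.write(b"RIFF\x0c\x00\x00\x00WEBPVP8 ")
result = sniff_image_format(demo)
os.remove(demo)
```

A viewer doing this will happily open the renamed file; one that trusts the `.png` extension and hands the bytes to a PNG decoder will fail.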
- Comment on Google Revisits JPEG XL in Chromium After Earlier Removal 5 days ago:
So it's basically "nobody wants to use it because nobody is using it."
I actually rather like it, and at this point many of the tools I use have caught up so I don't mind it any more myself.
- Comment on [deleted] 1 week ago:
It comes down to whether you can demonstrate this flaw. If you have a way to show it actually working then credentials shouldn't matter.
If your attempts at disclosure are being ignored then check:
- am I presenting this in a way that makes me seem like a deranged crazy person?
- Am I a deranged crazy person?
Try to resolve those. If the company you're trying to contact is still sending your emails to the spam bin, maybe try contacting other people who have done disclosure on issues like this before. If you can convince them, they can use their own credibility to advance the issue.
If that doesn't work then I guess check the "deranged crazy person" things one more time and move on to disclosing it publicly yourself.
- Comment on [deleted] 1 week ago:
The Coordinated Vulnerability Disclosure (CVD) process:
- Discovery: The researcher finds the problem.
- Private Notification: The researcher contacts the vendor/owner directly and privately. No public information is released yet.
- The Embargo Period: The researcher and vendor agree on a timeframe for the fix (industry standard is often 90 days, popularized by Google Project Zero).
- Remediation: The vendor develops and deploys a patch.
- Public Disclosure: Once the patch is live (or the deadline expires), the researcher publishes their findings, often assigned a CVE (Common Vulnerabilities and Exposures) ID.
- Proof of Concept (PoC): Technical details or code showing exactly how to exploit the flaw may be released to help defenders understand the risk, usually after users have had time to patch.
You say the flaw is "fundamental", suggesting you don't think it can be patched? I guess I'd inform my investment manager during the "private notification" phase as well, then. It's possible you're wrong about its patchability, of course, so I'd recommend carrying on with CVD regardless.
- Comment on [deleted] 1 week ago:
I'm sure this thread will have more than just knee-jerk scary "feels" or inaccurate pop culture references in it, and we'll be able to have a nice discussion about what the technology in the linked article is actually about.
- Comment on Gmail users warned to opt out of new feature - what we know 1 week ago:
If you believe that Google's just going to brazenly lie about what they're doing, what's the point of changing the settings at all then?
In fact, Google is subject to various laws, and to the concerns of big corporate customers; both could mean big trouble if Google flagrantly and wilfully misuses data that's supposed to be private. So yes, if the feature doesn't say the data is being used for training, I tend to believe that it isn't. It at least behooves those who claim otherwise to come up with actual evidence for their claims.
- Comment on Gmail can read your emails and attachments to train its AI, unless you opt out 1 week ago:
You are being sarcastic but this is indeed the case. Especially for companies like Google, which are concerned about being sued or dumped by major corporations that very much don't want their data to be used for training without permission.
There's a bit of a free-for-all with published data these days, but private data is another matter.
- Comment on Gmail users warned to opt out of new feature - what we know 1 week ago:
Yes, they are. Not sure why you are bringing that up.
I am bringing it up because the setting Google is presenting only describes using AI on your data, not training AI on your data.
- Comment on Gmail users warned to opt out of new feature - what we know 1 week ago:
Yes, exactly. Training an AI is a completely different process from prompting it; it takes orders of magnitude more work and can't be done on a model that's currently in use.
- Comment on Gmail can read your emails and attachments to train its AI, unless you opt out 1 week ago:
I have yet to see any of these news sites show evidence that this setting is for allowing training with your data. That's not what the setting itself says, it seems like this is just a panicked ripple of clickbait titles sweeping rapidly across social media on a wave of AI dopamine.
- Comment on Gmail users warned to opt out of new feature - what we know 1 week ago:
Yes, but the point is that granting Google permission to manage your data by AI is a very different thing from training the AI on your data. You can do all the things you describe without also having the AI train on the data, indeed it's a hard bit of extra work to train the AI on the data as well.
If the setting isn't specifically saying that it's to let them train AI on your data, then I'm inclined to believe that's not what it's for. They're very different processes, both technically and legally. I think there's just some click-baiting going on here with the scary "they're training on your data!" accusation; it seems to be baseless.
- Comment on Gmail users warned to opt out of new feature - what we know 1 week ago:
Understand that basically ANYTHING that "uses AI" is using you for training data.
No, that's not necessarily the case. A lot of people don't understand how AI training and AI inference work; they are two completely separate processes, and doing one does not entail doing the other. In fact, a lot of research is going into making it possible to do both at once, because that would be really handy, but it can't really be done that way yet.
And if you read any of the EULAs
Go ahead and do so, they will have separate sections specifically about the use of data for training. Data privacy is regulated by a lot of laws, even in the United States, and corporate users are extremely picky about that sort of stuff.
If the checkbox you're checking in the settings isn't explicitly saying "this is to give permission to use your data for training" then it probably isn't doing that. There might be a separate one somewhere, it might just be a blanket thing covered in the EULA, but "tricking" the user like that wouldn't make any sense. It doesn't save them any legal hassle to do it like that.
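The training/inference distinction above can be shown with a toy model. This is a simplified sketch (nothing to do with Google's actual systems, and the model is a single hypothetical parameter): inference only reads the weights, while a training step does the extra work of computing a gradient and writing new weights back.

```python
class TinyModel:
    def __init__(self):
        self.w = 0.5  # a single learned parameter

    def infer(self, x):
        # Forward pass only: serving a request never modifies the model.
        return self.w * x

    def train_step(self, x, target, lr=0.1):
        # Gradient of squared error w.r.t. w, then a weight update --
        # extra work that plain inference never performs.
        pred = self.w * x
        grad = 2 * (pred - target) * x
        self.w -= lr * grad

model = TinyModel()
before = model.w
model.infer(3.0)            # answering a "prompt" leaves w untouched
unchanged = (model.w == before)
model.train_step(3.0, 6.0)  # a training step actually rewrites the model
changed = (model.w != before)
```

Running inference on user data is just the `infer` path; training on it is a deliberate, separate pipeline that has to be built and operated on purpose.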
- Comment on Is there a practical reason data centers have to sprawl outward instead of upward? 1 week ago:
But then the roof has to support the entire weight of planet Earth on top of it, which is a much harder engineering challenge than pumping the electricity in the first place.
- Comment on Gmail users warned to opt out of new feature - what we know 1 week ago:
I'm not seeing where any of this gives Google permission to train AI using your data. As far as I can see it's all about using AI to manage your data, which is a completely different thing. The word "training" appears to originate in Dave Jones' tweet, not in any of the Google pages being quoted. Is there any confirmation that this is actually happening, and not just a social media panic?
- Comment on Stupid sexy raft 1 week ago:
"If we knew what we were doing it wouldn't be an experiment, would it?"
- Comment on Do you think there would eventually be technology to delete/replace memories (like the *Men In Black* device). How much do you fear such technology? (like misuse by governments/criminals) 1 week ago:
Yeah, I was going to recommend this one too. IMO one of the more realistic depictions of how memory-editing technology would work, at least in terms of what the technical requirements would be. All the inside-the-head stuff was just good cinema, not necessarily realistic.
- Comment on Robotics Company Builds Straight-Up Terminator 1 week ago:
So far. Since Terminators are capable humanoid robots and the goal is to make capable humanoid robots, each improvement is going to look more like a Terminator. And also like every other capable humanoid robot from other sci-fi as well, good, bad, or neutral.
The only reason to leap to "OMG it's a Terminator!" is to bait the clicks.
- Comment on Firefox is Getting a New AI Browsing Mode 1 week ago:
Firefox is open source, you can see for yourself what it's doing. You don't have to trust them.
- Comment on Robotics Company Builds Straight-Up Terminator 1 week ago:
Wheeled robots have been the norm for decades, we didn't skip that.
- Comment on Robotics Company Builds Straight-Up Terminator 1 week ago:
You have an overly broad definition of "doomsday machine."
- Comment on Robotics Company Builds Straight-Up Terminator 1 week ago:
I'd say "found the bot", but even bots are familiar with why boobs appeal to human sensibilities.
- Comment on Robotics Company Builds Straight-Up Terminator 1 week ago:
What clickbait. Apparently any vaguely capable humanoid robot is a "straight-up Terminator"?
- Comment on Firefox is Getting a New AI Browsing Mode 1 week ago:
You can have fully private AI, it can run locally on your own computer with no data leaving it. This is one of the things I'm looking forward to from Firefox AI support, since other browsers come from organizations that have their own AI they'd rather you use instead of local ones.
I have no idea how this would interfere with the extension system, or adblockers in particular. They're completely separate things.
- Comment on Firefox is Getting a New AI Browsing Mode 1 week ago:
With their user base, or with this particular social media bubble we're in right now?
- Comment on Sam Altman and husband reportedly working to genetically engineer babies from having hereditary disease 1 week ago:
The details are quite clear, even in this article. They're targeting genetic diseases.
- Comment on Sam Altman and husband reportedly working to genetically engineer babies from having hereditary disease 1 week ago:
Why is this "dystopic"? Isn't working toward eliminating diseases one of the things people keep demanding that billionaires and AI companies should be doing with their resources?
- Comment on Bangladesh court sentences ex-PM to be hanged for crimes against humanity 1 week ago:
I'd argue that even if it's magically determined with 100% accuracy, we still shouldn't give the state the power to decide to kill them. It's unnecessary. Studies have shown that the harshness of the penalty is much less of a deterrent than the certainty of being caught; putting incorrigible sociopaths away in prisons or mental institutions for life is just as effective at both deterring and protecting.
The key is having a trustworthy justice system. If you don't have a trustworthy justice system then no amount of harshness helps, in fact it only hurts.
- Comment on When "AI" content becomes indistinguishable from human-made content, is there, philosophically speaking, any meaningful differences between the two? 2 weeks ago:
No, as I said courts have been ruling the opposite. The act of training an AI is fair use. There have been cases where other acts of copyright violation may have occurred before getting to that step (for example, the download of pirated ebooks by Meta has been alleged and is going to trial) but the training itself is not a copyright violation.
You can argue about ethics separately but if you're going to invoke copyright then that's a question of law, not ethics.