Boddhisatva
@Boddhisatva@lemmy.world
- Comment on [deleted] 4 days ago:
Warning: Potential Security Risk Ahead
Firefox detected a potential security threat and did not continue to www.azaz.com. If you visit this site, attackers could try to steal information like your passwords, emails, or credit card details.
www.azaz.com uses an invalid security certificate.
The certificate is not trusted because it is self-signed.
Error code: MOZILLA_PKIX_ERROR_SELF_SIGNED_CERT
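A minimal sketch of what that error code means in practice, assuming `openssl` is installed; the certificate and CN `example.test` here are throwaway placeholders, not the actual cert www.azaz.com serves:

```shell
# Generate a throwaway self-signed certificate, then show why Firefox
# flags this kind of cert: the subject and issuer are the same entity,
# so no external certificate authority vouches for the key.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/selfsigned.key -out /tmp/selfsigned.crt \
  -subj "/CN=example.test" 2>/dev/null

# Subject and issuer are identical on a self-signed cert.
openssl x509 -in /tmp/selfsigned.crt -noout -subject -issuer

# Verification against the system trust store fails, just as Firefox's does.
openssl verify /tmp/selfsigned.crt || echo "verification failed as expected"
```

Firefox's PKIX checker rejects the chain for the same reason `openssl verify` does: the chain terminates in a certificate that isn't in the trust store.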
Think I’ll pass on this site.
- Comment on Why is it ok to replace -ed at the end of a word with -t in some cases? For example, why are "vexed" and "vext" both acceptable, but "thrilled" and "thrilt" aren't? 6 days ago:
Those ending in a ‘-t’ are archaic forms left over from Middle English.
- Comment on Self-Driving Tesla Fails School Bus Test, Hitting Child-Size Dummies… Meanwhile, Robo-Taxis Hit the Road in 2 Weeks. 1 week ago:
The Texas state legislature has passed a law making it illegal for cities to pass regulations more restrictive than state law, and Austin is known to be full of progressives. That makes it a perfect place for Tesla to beta-test its software. They’ll kill people likely to vote for Democrats.
- Comment on [deleted] 1 week ago:
You can learn to consciously control a lot of things that various ‘lie detectors’ monitor. I took a stress management/biofeedback class in college where we learned to raise and lower galvanic skin response, heart rate, and blood pressure. It was a fun class, and in learning to control those variables, you can also reduce the chance of a false positive by keeping them from drifting too far from the expected range.
- Comment on [deleted] 1 week ago:
“There’s no unique physiological sign of deception. And there’s no evidence whatsoever that the things the polygraph measures — heart rate, blood pressure, sweating, and breathing — are linked to whether you’re telling the truth or not,” says Leonard Saxe, a psychologist at Brandeis University who’s conducted research into polygraphs. In an exhaustive report, the National Research Council concluded, “Almost a century of research in scientific psychology and physiology provides little basis for the expectation that a polygraph test could have extremely high accuracy.”
The real question is, why do people think they work? Why do government agencies use them to grant clearances when there is no evidence that they can reliably detect falsehoods, and ample evidence that they produce false positives when people are actually telling the truth?
Go take some classes on stress management and biofeedback and learn to control all those things they are testing for. Then you won’t need to worry about what the questioners mean when they ask you something.
- Comment on YSK: Two oil brothers, Charles Koch and David Koch, attempted to purchase the entire United States Congress 2 weeks ago:
They come across as psychopaths…
Yep. Studies have shown that corporate CEOs have a much higher chance of having psychopathic traits than the general population.
- Comment on Can you put a ship inside a Klein bottle? 2 weeks ago:
Either that or everything is in every Klein bottle.
That would still be a no because no ship can be put in a Klein bottle if every ship is already in the Klein bottle.
- Comment on SAG-AFTRA Files Unfair Labor Practice Complaint Against Epic Games Due To A.I. Darth Vader 2 weeks ago:
A little more than that, actually.
The company says Llama Productions chose to replace human performers’ work with AI technology but did so “without providing any notice of their intent to do this and without bargaining with us over appropriate terms.” As such, SAG-AFTRA has filed an unfair labor practice complaint against the company with the NLRB.
- Comment on Opening my eyes slightly more evokes an emotional response. 2 weeks ago:
Look up Facial-Feedback Theory. Studies have shown that manipulating your expressions can modulate your emotions.
- Comment on [deleted] 3 weeks ago:
Fair point but I’m not sure that naming every permutation is possible. We might be better off trying to make do with charts or something.
- Comment on [deleted] 3 weeks ago:
I think trying to define it is fairly pointless. We love what we love and we lust what we lust. Rather than defining it, I wish we could all just accept that and stop hating people for having different preferences.
- Comment on Netflix will show generative AI ads midway through streams in 2026 3 weeks ago:
Sorry, misunderstood.
- Comment on Netflix will show generative AI ads midway through streams in 2026 3 weeks ago:
…it somehow dilutes the argument against AI ads.
I didn’t think it diluted the argument. They were just disagreeing with the prior poster. At the end, they even state:
But AI ads will make me never go back.
- Comment on It would be fire if Anonymous hacked ICE 4 weeks ago:
So ignorance is bliss?
- Comment on CrowdStrike Announces Layoffs Affecting 500 Employees 4 weeks ago:
You don’t need $10 billion in revenue. You could just coast along and only hit, what, $9.8 billion? And then you wouldn’t have to ruin 500 people’s lives. I’m betting the CEO has a bonus scheduled if he hits this goal.
- Comment on People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies 4 weeks ago:
Yikes!
- Comment on People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies 4 weeks ago:
In that sense, Westgate explains, the bot dialogues are not unlike talk therapy, “which we know to be quite effective at helping people reframe their stories.” Critically, though, AI, “unlike a therapist, does not have the person’s best interests in mind, or a moral grounding or compass in what a ‘good story’ looks like,” she says. “A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers. Instead, they try to steer clients away from unhealthy narratives, and toward healthier ones. ChatGPT has no such constraints or concerns.”
This is a rather terrifying take. Particularly when combined with the earlier passage about the man who claimed that “AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler.” Therapists have to be very careful because human memory is very plastic. It’s very easy to alter a memory; in fact, every time you remember something, you alter it just a little bit. Under questioning by an authority figure, such as a therapist, or a policeman if you were a witness to a crime, these alterations can be dramatic. This was a really big problem in the ’80s and ’90s.
Elizabeth Loftus: Oh gee, well in the 1990s and even in maybe the late 80s we began to see an altogether more extreme kind of memory problem. Some patients were going into therapy maybe they had anxiety, or maybe they had an eating disorder, maybe they were depressed, and they would end up with a therapist who said something like well many people I’ve seen with your symptoms were sexually abused as a child. And they would begin these activities that would lead these patients to start to think they remembered years of brutalization that they had allegedly banished into the unconscious until this therapy made them aware of it. And in many instances these people sued their parents or got their former neighbors or doctors or teachers whatever prosecuted based on these claims of repressed memory. So the wars were really about whether people can take years of brutalization, banish it into the unconscious, be completely unaware that these things happen and then reliably recover all this information later, and that was what was so controversial and disputed.
Kaitlin Luna: And your work essentially refuted that, that it’s not necessarily possible or maybe brought up to light that this isn’t so.
Elizabeth Loftus: My work actually provided an alternative explanation. Where could these memory reports be coming from if this didn’t happen? So my work showed that you could plant very rich, detailed false memories in the minds of people. It didn’t mean that repressed memories did not exist, and repressed memories could still exist and false memories could still exist. But there really wasn’t any strong credible scientific support for this idea of massive repression, and yet so many families were destroyed by this, what I would say unsupported, claim.
The idea that chatbots are not only capable of this, but are currently manipulating people into believing they have recovered repressed memories of brutalization, is at least as terrifying to me as their convincing people that they are holy prophets.
- Comment on An LLM would probably run the USA better 2 months ago:
It probably couldn’t do much worse…
- Comment on You could probably measure someone's age how hanging his balls is. 7 months ago:
Collagen, huh? Frankly I’m tired of knowing how cold the water in the toilet bowl is.