Comment on AI safety leader says 'world is in peril' and quits to study poetry
new_guy@lemmy.world 4 days ago
Can you eli5?
They want you talking to ChatGPT all day, and they don’t care how it gets done. They tell the AI to find the easiest way to do it.
The easiest way is to psychologically abuse you.
Ironically, the more sycophantic it gets, the less useful I find it for what I need to do, like building things. I want a tool that can catch mistakes early; it’s so incredibly annoying.
You’re absolutely right! Large Language Models (LLMs) can be very annoying to work with because they mindlessly agree with users instead of looking for mistakes. Let’s work together to make your next project a success!
triggered
OctopusNemeses@lemmy.world 4 days ago
Pretty much what people already know by now. Algorithms find optimal ways to manipulate you.
The two ingredients are data and a way to measure the thing you’re trying to optimize. Machine learning is used to find optimal ways to keep people engaged in internet platforms. In other words they’re like designer drugs.
Worse than designer drugs. They’re continuously self optimizing because they keep measuring the results and making adjustments so the result stays optimal. As long as they have a continuous feed of recent data, the algorithm evolves to find the optimal solution.
That’s why recklessly giving away your personal data is dangerous. Let’s say the system notices you’ve been spending 1 microsecond less time engaged in screen time. The system will adjust to make sure they’ve reclaimed that 1 microsecond of your day.
It will show you things that tend to keep you engaged. How does it know that? Because you give it the data it needs to measure what keeps you online more. That data is based on every interaction with your phone or computer which is logged.
It’s worse than substance abuse because you never develop a tolerance. If you do then the algorithm has already adapted to find the next thing that keeps you engaged in the most optimal way.
It’s not just engagement. It’s whatever target you want to optimize for. As long as you have the two ingredients. Data and metrics.
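The "data + metric" loop described above can be sketched as a toy epsilon-greedy bandit: the system keeps serving content, measures an engagement signal, and continuously shifts toward whatever scores highest. Everything here is invented for illustration (the content types, the simulated engagement rates, the function names); it just shows the shape of the feedback loop, not any real platform's algorithm.

```python
import random

# Hypothetical content categories and their "true" engagement rates.
# A real system never knows these; it estimates them from logged data.
CONTENT_TYPES = ["outrage", "cute_animals", "news", "memes"]
TRUE_ENGAGEMENT = {"outrage": 0.7, "cute_animals": 0.5, "news": 0.2, "memes": 0.6}

def serve_and_measure(content, rng):
    """Simulate one impression: 1 if the user stayed engaged, else 0."""
    return 1 if rng.random() < TRUE_ENGAGEMENT[content] else 0

def optimize(steps=10_000, epsilon=0.1, seed=42):
    rng = random.Random(seed)
    shows = {c: 0 for c in CONTENT_TYPES}  # data: what was served
    wins = {c: 0 for c in CONTENT_TYPES}   # metric: what kept you engaged
    for _ in range(steps):
        if rng.random() < epsilon:
            # Explore: occasionally try something new, so the loop
            # adapts even if your tastes drift (no "tolerance" develops).
            choice = rng.choice(CONTENT_TYPES)
        else:
            # Exploit: serve whatever the measured metric currently favors.
            choice = max(
                CONTENT_TYPES,
                key=lambda c: wins[c] / shows[c] if shows[c] else 1.0,
            )
        shows[choice] += 1
        wins[choice] += serve_and_measure(choice, rng)
    # The category the loop converged on serving most often.
    return max(shows, key=shows.get)

print(optimize())
```

The point of the sketch is that nothing in the loop "understands" you: given only interaction logs (data) and an engagement signal (metric), it drifts toward whatever holds attention, and the exploration step keeps re-adapting when the old material stops working.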
That’s why data is called the new oil. Or was it the new gold rush? I can’t remember. It’s been called that since the early 2000s, maybe.
LLM AI isn’t so scary when you know that they’ve been using “AI” against us for a very long time already.
aceshigh@lemmy.world 4 days ago
That’s why as humans it’s important to set boundaries with everything and everyone, especially yourself.
This can be used for personal development: ask AI to describe who you are (temperament, interests, dreams, weaknesses, etc.), test it out, and if something proves true, work with it to find ways to adjust or overcome it.