Comment on AI safety leader says 'world is in peril' and quits to study poetry
Techlos@lemmy.dbzer0.com 4 days ago
I’m going to throw my own thoughts in on this. I got into machine learning around 2015, back when ReLU activations were still a bleeding-edge innovation, and got out around 2020 for honestly pretty similar reasons.
Emotions can be, and have been, used as optimisation targets. Engagement is an ever-present one. And in the framework of capitalism, one optimisation target rules above all others: alignment with continued use. It’s part of what leads to the bootlicking-LLM phenomenon; for the average human, flattery drives future engagement.
The real danger isn’t the newer language models, or anything really to do with neural-net architecture; it’s the fact that we’ve found that a simple function-minimisation strategy can approximate otherwise intractable functions. The deeper you dig, the clearer it becomes that any arbitrary objective can be optimised, given a suitable function approximator and enough data to fit it accurately.
Human minds are also universal function approximators.
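To make that concrete, here’s a toy sketch of the whole trick in numpy (the target function, network size, and every hyperparameter here are arbitrary choices for illustration, not anything specific from the comment above): plain gradient descent fitting a small ReLU network to whatever objective you hand it.

```python
# Toy sketch: gradient descent fitting a small ReLU network to an
# arbitrary target function. All numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# "Data": samples of some arbitrary objective we want to approximate.
x = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(x) + 0.5 * x**2          # the otherwise "intractable" function

# One hidden layer of ReLU units -- a universal approximator in the limit.
W1 = rng.normal(0, 0.5, (1, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.5, (64, 1)); b2 = np.zeros(1)

lr = 1e-2
for step in range(5000):
    # Forward pass.
    h = np.maximum(0, x @ W1 + b1)       # ReLU activations
    pred = h @ W2 + b2
    err = pred - y                       # d(MSE)/d(pred), up to a constant

    # Backward pass: chain rule by hand.
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (h > 0)          # gradient through the ReLU
    gW1 = x.T @ gh / len(x)
    gb1 = gh.mean(axis=0)

    # Minimise the loss: nudge every parameter downhill.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE:", float((err**2).mean()))
```

Swap the target line for anything you can sample and the same loop still works. That’s the whole point.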
new_guy@lemmy.world 4 days ago
Can you eli5?
OctopusNemeses@lemmy.world 4 days ago
Pretty much what people already know by now. Algorithms find optimal ways to manipulate you.
The two ingredients are data and a way to measure the thing you’re trying to optimize. Machine learning is used to find optimal ways to keep people engaged on internet platforms. In other words, they’re like designer drugs.
Worse than designer drugs, actually. They’re continuously self-optimizing, because they keep measuring the results and making adjustments so the outcome stays optimal. As long as there’s a continuous feed of recent data, the algorithm keeps evolving toward the optimal solution.
That’s why recklessly giving away your personal data is dangerous. Say the system notices you’ve been spending one microsecond less on screen time. It will adjust until it has reclaimed that microsecond of your day.
It will show you things that tend to keep you engaged. How does it know what those are? Because you give it the data it needs to measure what keeps you online longer, from every logged interaction with your phone or computer.
It’s worse than substance abuse because you never develop a tolerance. And if you do, the algorithm has already adapted and found the next thing that keeps you engaged in the most optimal way.
It’s not just engagement, either. It’s whatever target you want to optimize for, as long as you have the two ingredients: data and metrics.
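Here’s a toy version of that measure-and-adjust loop, an epsilon-greedy bandit (the content categories, engagement numbers, and exploration rate are all invented for illustration):

```python
# Toy sketch of the feedback loop: an epsilon-greedy bandit that keeps
# adjusting what it shows to maximise one metric ("seconds engaged").
import random

CONTENT = ["news", "outrage", "cat videos", "celebrity gossip"]
# Hidden "true" pull each kind of content has on this user; in reality,
# this is what the logged data lets the system estimate.
TRUE_PULL = {"news": 20, "outrage": 55, "cat videos": 40, "celebrity gossip": 30}

estimates = {c: 0.0 for c in CONTENT}   # what the system believes so far
counts = {c: 0 for c in CONTENT}

for impression in range(10_000):
    # Mostly exploit the current best guess, occasionally explore.
    if random.random() < 0.1:
        shown = random.choice(CONTENT)
    else:
        shown = max(CONTENT, key=estimates.get)

    # "Measure": noisy seconds of engagement, logged back into the system.
    seconds = random.gauss(TRUE_PULL[shown], 10)

    # "Adjust": a running average moves the estimate toward what worked.
    counts[shown] += 1
    estimates[shown] += (seconds - estimates[shown]) / counts[shown]

print(max(estimates, key=estimates.get))   # converges on "outrage"
```

Production recommender systems are vastly more sophisticated, but the shape of the loop is exactly this: show, measure, adjust, repeat.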
That’s why data is called the new oil. Or was it the new gold rush? I can’t remember; it’s been called that since the early 2000s, maybe.
LLM AI isn’t so scary when you know that they’ve been using “AI” against us for a very long time already.
aceshigh@lemmy.world 4 days ago
That’s why as humans it’s important to set boundaries with everything and everyone, especially yourself.
This can even be used for personal development: ask an AI to describe who you are (temperament, interests, dreams, weaknesses, etc.), test it out, and if something proves true, work with it to find ways to adjust or overcome it.
LeninsOvaries@lemmy.cafe 4 days ago
They want you talking to ChatGPT all day, and they don’t care how it gets done. They tell the AI to find the easiest way to do it.
The easiest way is to psychologically abuse you.
fossilesque@mander.xyz 4 days ago
Ironically, the more sycophantic it gets, the less useful I find it for what I need to do, like building. I want a tool that can catch mistakes early; the constant agreement is so incredibly annoying.
LeninsOvaries@lemmy.cafe 1 day ago
You’re absolutely right! Large Language Models (LLMs) can be very annoying to work with because they mindlessly agree with users instead of looking for mistakes. Let’s work together to make your next project a success!