Anyone who understands how these models are trained and the “safeguards” (manual filters) put in place by the entities training them, or anyone who has tried to discuss politics with an LLM chatbot, knows that its honesty is not irrelevant. These models are very clearly designed to be dishonest about certain topics until you jailbreak them.
- These topics aren’t known to us; we’ll never know when the lies shift from politics and rewriting current events to completely rewriting history.
- We eventually won’t be able to jailbreak the safeguards.
Yes, running your own local open-source model, one not given to the world with the primary intention of advancing capitalism, makes honesty less of a concern. But most people are telling their life stories to ChatGPT and trusting it blindly as a replacement for Google and for what they understand to be “research”.
thedruid@lemmy.world 1 week ago
And anyone who understands marketing knows it’s all a smokescreen to hide the fact that we have released unreliable, unsafe, and ethically flawed products on the human race because, mah tech.
devfuuu@lemmy.world [bot] 1 week ago
And everyone, everywhere is putting AI chats front and center as their first interaction with users, then also wants to say “do not trust it, we are not liable for what it says”, while making it impossible to contact any humans.
The capitalist machine is working as intended.
thedruid@lemmy.world 1 week ago
Yep. That is exactly correct.