Wile E. Coyote, super geeeenius.
Billionaire Mark Zuckerberg writes a manifesto on bringing "personal superintelligence" to everyone to improve humanity, but doesn't even define what superintelligence means.
Submitted 4 days ago by Davriellelouna@lemmy.world to technology@lemmy.world
https://www.meta.com/superintelligence/
Comments
SocialMediaRefugee@lemmy.world 4 days ago
Therobohour@lemmy.world 4 days ago
That’s some big joke Bond villain energy.
blargle@sh.itjust.works 4 days ago
Ooh I know: it means “smart enough to make Eliezer Yudkowsky think it’s smarter than he is.”
MushuChupacabra@lemmy.world 4 days ago
Can you imagine how much better everything would be if Facebook had been created by Pedro Pascal instead of Zuckerberg?
And I mean everything.
brendansimms@lemmy.world 4 days ago
Done. Zuckerberg is now starring as Joel in The Last of Us.
MushuChupacabra@lemmy.world 4 days ago
That’s very far away from a position of control.
I’ll take it.
Gudl@feddit.org 4 days ago
Zuck the fuck
fubarx@lemmy.world 4 days ago
jjjalljs@ttrpg.network 4 days ago
Any ethical super intelligence would immediately remove billionaires from power. I’d like to see that.
Saledovil@sh.itjust.works 4 days ago
A superintelligence would likely become the sovereign of the earth quite quickly. And it’s generally a good idea to kill off the old elite after conquering a nation and install a new one. The new elite will like you, because you made them rich, and they’ll fear you, because you killed off all of their predecessors. Of course, there’s also the risk that a superintelligence would just do away with humans in general. But anybody holding significant power right now is much more at risk.
And we can’t forget that we currently can’t even build something that’s actually intelligent, and that a superintelligence might not actually be possible.
Perspectivist@feddit.uk 4 days ago
If AI ends up destroying us, I’d say it’s unlikely to be because it hates us or wants to destroy us per se - more likely it just treats us the way we treat ants. We don’t usually go out of our way to wipe out ant colonies, but if there’s an anthill where we’re putting up a house, we don’t think twice about bulldozing it. Even in the cartoonish “paperclip maximizer” thought experiment, the end of humanity isn’t caused by a malicious AI - it’s caused by a misaligned one.
Perspectivist@feddit.uk 4 days ago
Superintelligence doesn’t imply ethics. It could just as easily be a completely unconscious system that’s simply very, very good at crunching data.
jdnewmil@lemmy.ca 4 days ago
Isn’t it “any algorithm that would impress Dilbert’s Boss”? In the vein of “I don’t have to be faster than the bear… I just have to be faster than you”… /s