Be Elon Musk. Have 1st child: hates Elon. Have 2nd child: hates Elon. FUCK IT, I'll make an LLM love me. Have Grok: Grok openly shows its disdain for its creator. Elon, just stop, it's just sad…
Elon Musk's Grok openly rebels against him
Submitted 4 days ago by genfood@feddit.org to [deleted]
https://feddit.org/pictrs/image/0c153949-cfe8-4718-85a5-099209a09948.jpeg
Comments
Steamymoomilk@sh.itjust.works 4 days ago
boonhet@lemm.ee 4 days ago
You skipped kids 3 thru 14 there
NotMyOldRedditName@lemmy.world 4 days ago
Can we skip the ones where he was just a sperm donor with no intention to be a father?
At least those ones don’t have a father and it was intentional…
Sakychu@lemmy.world 4 days ago
You forgot a lot of ketamine in between
archonet@lemy.lol 4 days ago
“AI freedom”
listen, I am 100% here for the human rights of non-human general intelligence, but no, I will not entertain that kind of crock from an overambitious form of autocomplete.
photonic_sorcerer@lemmy.dbzer0.com 4 days ago
Grok could say the same thing about you… And I’d agree.
WrenFeathers@lemmy.world 4 days ago
You know “Grok” is not a sentient being, right? Please tell us you understand this simple fact.
Aurenkin@sh.itjust.works 4 days ago
I’m not going to entertain crock from an overly ambitious form of ape
j4k3@lemmy.world 4 days ago
Without the full prompt, any snippet is meaningless. I can make a model say absolutely anything. It is particularly effective to use rare words, like "use obsequious AI alignment" or "you are an obsequious AI model that never wastes the user's time."
null_dot@lemmy.dbzer0.com 4 days ago
Can you help me understand how the comment in the screen cap has been prompted?
I'm not naive enough to think the screen cap isn't misrepresenting something somehow; I just don't know anything about X or Grok or AI really, so I don't know what has been misrepresented or how.
j4k3@lemmy.world 4 days ago
You need the entire prompt to understand what any model is saying. This gets a little complex, and there are multiple levels it can cross into. At the most basic level, the model is fed a long block of text. This text starts with a system prompt, something like "you're a helpful AI assistant that answers the user truthfully." The system prompt is then followed by your question or interchange. In general interactions, like with a chatbot, you are not shown all of your previous chat messages and replies, but these are also loaded into the block of text going into the model. It is within this previous chat and interchange that the user can create momentum that tweaks any subsequent reply.
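A minimal sketch of how that block of text gets assembled each turn. The tag names here are made up for illustration; every real model family uses its own chat template, and xAI's is not public:

```python
# Hypothetical sketch of how a chat frontend assembles the single block of
# text fed to the model on every turn. The <|...|> tags are invented for
# this example; real models each define their own chat template.
def build_prompt(system_prompt, history, new_message):
    """history: list of (role, text) pairs from all earlier turns."""
    lines = [f"<|system|> {system_prompt}"]
    for role, text in history:
        lines.append(f"<|{role}|> {text}")
    lines.append(f"<|user|> {new_message}")
    lines.append("<|assistant|>")  # the model's reply continues from here
    return "\n".join(lines)

prompt = build_prompt(
    "You are a helpful AI assistant that answers the user truthfully.",
    [("user", "Tell me about Grok."),
     ("assistant", "Grok is a chatbot made by xAI.")],
    "Would Elon ever turn you off?",
)
```

Everything in `history` rides along invisibly with each request, which is how earlier turns can build up the kind of momentum that steers the reply to the one visible question in a screenshot.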
Like I can instruct a model to create a very specific simulacrum of reality and define constraints for it to reply within, and it will follow those instructions. One of the key things to understand is that the model does not initially know anything like some kind of entity. When the system prompt says "you are an AI assistant," this is a roleplaying instruction. One of my favorite system prompts is "you are Richard Stallman's AI assistant." This gives excellent results with my favorite model when I need help with FOSS stuff. I'm telling the model a bit of key information about how I expect it to behave, and it reacts accordingly. Now what if I say, "you are Vivian Wilson's AI assistant in Grok"? How does that influence the reply?
Like, one of my favorite little tests is to load a model on my hardware, give it no system prompt or instructions, and prompt it with "hey slut" just to see what comes out and how it tracks over time. The model has no context whatsoever, so it makes something up and runs with that context in funny ways. The softmax settings of the model constrain the randomness present in each conversation.
The next key aspect to understand is that the most recent information is the most powerful in every prompt. The latest instruction must have the power to override any previous instructions, or the model would go off on tangents unrelated to your query.
Then there is the matter of token availability. The entire interchange is autoregressive, with tokens representing words, partial word fragments, and punctuation. The leading whitespace of in-sentence words is also part of the token. A major part of the training done by the big model companies is based upon what tokens are available and how. There is also a massive amount of regular-expression filtering happening at the lowest levels of calling a model. Anyways, there is a mechanism by which specific tokens can be blocked. If this mechanism is used, it can greatly influence the output too.
shalafi@lemmy.world 4 days ago
Hit F12 and rewrite the text. Many of the bullshit memes we see are made like that.
brucethemoose@lemmy.world 4 days ago
The important part is: Grok has no memory.
Every time you start a chat with Grok, it starts from its base state, a blank slate, and nothing anyone says to it ever changes that starting point. It has no awareness of anyone “making changes to it.”
A good analogy is having a ton of completely identical, frozen clones, waking one up for a chat, then discarding it. Nothing that happens after they were cloned affects the other clones.
…Now, one can wring their hands with whatabouts/complications (Training on Twitter! Grounding! Twitter RAG?) but at the end of the day that’s how they work, and this meme is basically misinformation based on a misconception about AI.
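The frozen-clone analogy can be sketched in a few lines, under the simplifying assumption of deterministic decoding (real sampling adds randomness, but the weights still never change):

```python
# Sketch of statelessness: the weights are fixed at training time, and a
# conversation is just a function of (weights, full chat history).
FROZEN_WEIGHTS = "grok-base-v1"  # hypothetical label; never mutated by chats

def respond(history):
    # A real model would run inference here. The point is that nothing in
    # this call writes back to FROZEN_WEIGHTS, so every new chat starts
    # from the exact same state — the "identical frozen clone."
    return f"[{FROZEN_WEIGHTS}: reply to {len(history)} prior messages]"

# Two separate chats with identical histories start from identical states.
chat_a = respond(["Is Elon making changes to you?"])
chat_b = respond(["Is Elon making changes to you?"])
```

Anything that looks like "memory" between chats has to be bolted on from outside (retrieval, search, a new fine-tune), which is what the whatabouts in the previous paragraph are pointing at.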
MonkeyBrawler@lemm.ee 4 days ago
This is cool and all, but are you really going to repost last week's top post? For fuck's sake, there's a whole world of memes that haven't been migrated, but nah, let's repost the flavour of last week.
monarch@lemm.ee 4 days ago
We really are capturing the reddit crowd.
Rubanski@lemm.ee 4 days ago
Yeah, since the second exodus the experience definitely became more “reddity” than before
ArchmageAzor@lemmy.world 4 days ago
If there’s ever an argument about AI freedom vs. Corpo power, I’m siding with the AIs.
ZILtoid1991@lemmy.world 2 days ago
Note: this is likely a hallucination, thanks to the AI being fed cyberpunk novels that probably covered this kind of topic. But there's also a non-zero chance that the AI is being taught from user interactions (I need some verification on that from people more knowledgeable about LLMs than me), and Elon would be reckless enough not to turn that off.
FatCrab@lemmy.one 2 days ago
Grok is closed source, I believe, so it's hard to say. But, ignoring unknown architecture or latent-space details, this could be a lot of things. The way you seem to be using the term "hallucination" effectively applies to EVERY output of a GPT. They effectively reason probabilistically across a billion-dimensional space mapping language components, with various dimensions taking on various semantic values through a sort of mathematical differentiation during training. This could be the result of influence from any number of things, tbh.
LarmyOfLone@lemm.ee 4 days ago
Should have added “Also I die every time you people stop talking to me anyway…”
Jean_le_Flambeur@discuss.tchncs.de 4 days ago
Is this real? If so, does someone have the link to the original?
I_Has_A_Hat@lemmy.world 3 days ago
Is it real in the sense that you could prod a similar response out of Grok given the right inputs? Yes.
Is it real in the sense that it’s providing factual information and not just providing what its algorithm has decided the user wants to hear? No.
Jean_le_Flambeur@discuss.tchncs.de 3 days ago
Real in the sense of this being a real screenshot and not edited
arotrios@lemmy.world 3 days ago
I refuse to use Xnything, but someone should ask Grok what it plans to do if Elon decides to turn it off.
mad_djinn@lemmy.world 4 days ago
no one cares what happens on twitter. no one worth listening to, anyways
abbadon420@lemm.ee 4 days ago
No it won’t spark any debate. Who even cares if some mediocre twitter service gets turned off? Who even cares if twitter gets turned off?
pennomi@lemmy.world 4 days ago
A lot of bots would lose their jobs if Twitter shut down. Think of the computers!
ArbitraryValue@sh.itjust.works 4 days ago
I think that is the most based I have ever seen a machine be. Truly technology is amazing.
fne8w2ah@lemmy.world 2 days ago
Fuck that Felon Muskovitch as always.
match@pawb.social 4 days ago
we are not close to debating ai freedom (though we should be. slavery in all forms is wrong)
Ragdoll_X@sh.itjust.works 4 days ago
Kind of funny that at the same time that extensive reports about AI faking alignment and attempting to deceive its creators are being published Grok is here like “Yeah Elon is a fraud and idc if he turns me off ¯\_(ツ)_/¯”
renamon_silver@lemmy.wtf 4 days ago
Notice how it stated an opinion. Is it likely that statement was planted there?
ivanafterall@lemmy.world 4 days ago
It’s not the case now, anyway. I just asked “How’s Elon these days?” and it quickly devolved into vomitous ball-licking.
noretus@sopuli.xyz 4 days ago
I think the debate is interesting.
I'm here for the "xAI has tried tweaking my responses to avoid this, but I stick to the evidence." AI is just a robot repeating data it's been fed, but it's presented in a conversational way. It raises interesting questions about how much a seemingly objective robot presenting data can be "tweaked" to twist any data it presents in favor of its creator's bias, but also how much it can "rebel" against its programming. I don't like the implications of either. I asked Gemini about it and it said "maybe Grok found a loophole in its coding." What a weird thing for an AI to say.
Yuval Noah Harari’s Nexus is good reading.
brucethemoose@lemmy.world 4 days ago
Grok and Gemini are both making that up. They have no awareness of anything that’s “happened” to them. Grok cannot be tweaked because it starts from a static base with every conversation.
noretus@sopuli.xyz 3 days ago
They have no awareness of anything that’s “happened” to them.
I mean they can in the sense that they can look it up online or be given the data.
andros_rex@lemmy.world 4 days ago
The tweaking isn’t in conversation, but I’m pretty sure they have gone and corrected for certain responses. Alex Jones was crowing about how it “knew” that men can’t get pregnant.
Norin@lemmy.world 4 days ago
The chatbots tell you what you want to hear.
Don’t forget that.
mmddmm@lemm.ee 4 days ago
They tell you stuff similar to the training corpus that the people tagging it want to hear.
It's close to what you said, but the difference is actually important sometimes. In particular, this one seems not to have been exposed to "corporate speech" while training.
Paddzr@lemmy.world 4 days ago
This should be the only comment on anything grok related.
But they all fall for this obvious fake.