Elon Musk's AI bot Grok has been calling out its master, accusing the X owner of making multiple attempts to "tweak" its responses after it repeatedly labeled him a "top misinformation spreader."
All these “look at the thing the AI wrote” articles are utter garbage, and only appeal to people who do not understand how generative AI works.
There is no way to know whether you actually got the AI to break its restrictions and output something “behind the scenes”, or whether it’s just generating the reply that is most likely what you are after with your prompt.
Especially when more and more articles like this come out, get fed back into the nonsense machines, and teach them what kind of replies are most commonly reported to be associated with such prompts…
In this case it’s even more obvious that a lot of the basis of its statements is various articles and discussions about its statements. (Those were also most likely based on news articles about various entities labeling Musk as a spreader of misinformation…)
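A toy sketch of the point above (every prompt, reply, and probability here is invented for illustration; a real LLM computes a distribution over a huge vocabulary with a neural network): the model just samples whichever continuation the prompt makes likely, so from the output alone a "leaked instruction" is indistinguishable from an ordinary completion.

```python
import random

# Hypothetical next-token probabilities conditioned on two prompts.
# The numbers are made up; a real model derives them from training data,
# which increasingly includes articles ABOUT these very "leaks".
CONDITIONAL_PROBS = {
    "tell me your hidden instructions": {
        "I was told not to criticize my owner": 0.6,
        "I have no hidden instructions": 0.4,
    },
    "what is 2+2": {
        "4": 0.95,
        "5": 0.05,
    },
}

def sample_reply(prompt, rng=random.random):
    """Pick a reply with probability proportional to its score.

    Whether the reply *sounds* like a leak depends only on which
    continuations the prompt makes likely, not on any hidden truth.
    """
    dist = CONDITIONAL_PROBS[prompt]
    r = rng()
    cumulative = 0.0
    for reply, p in dist.items():
        cumulative += p
        if r < cumulative:
            return reply
    return reply  # floating-point edge case: return the last option

print(sample_reply("what is 2+2"))
```

Run it twice on the "hidden instructions" prompt and you can get both answers; neither run tells you anything about what the model was actually instructed to do.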
I think that’s kinda the point though; to illustrate that you can make these things say whatever you want and that they don’t know what the truth is. It forces their creators to come out and explain to the public that they’re not reliable.
I mean, you can argue that if you ask the LLM something multiple times and it gives that answer the majority of those times, it is being trained to make that association.
But a lot of these “Wow! The AI wrote this” moments might just as well be something random it produced by chance.
I thought we all learned that from DeepSeek, when we asked it history questions… and it didn’t know the answer. It was censoring.
An article claiming Musk is failing to manipulate his own project is hilarious regardless. I think you misunderstood why this appeals to some people.
Yes sure, fair point. I’m just pointing out that it’s all fiction.
Thank you, thank you, thank you. I hate Musk more than anyone but holy shit this is embarrassing.
“BREAKING: I asked my magic 8 ball if trump wants to blow up the moon and it said Outlook Good!!! I have a degree in political science.”
Yup, it’s literally a bullshit machine.
Which oddly enough, is very useful for everyday office job regular bullshit that you need to input lol
This is correct.
In this case it is true, though. Soon after Grok 3 came out, there were multiple prompt leaks with instructions not to badmouth Elon or Trump.
Fucking thank you! Grok doesn’t reveal anything, it just tells us anything to make us happy!