Interesting piece, enough so that I also read your original essay. This abridged version captures the gist of the topic well. I do hope this quote from the original will be included and expanded upon in the coming essays.
“The future doesn’t belong to machines. It belongs to those who refuse to be programmed by them—those who resist illusion with clarity, who answer coercion with reason, and who force the system to face its own contradictions.”
That posits a future in which enough people become aware of your findings to take action. I would suggest that there is a status quo that cannot be ignored.
At present, the future belongs to the people who own the machines. Your work proves that point in vivid detail. So is the battle for control with the product or the producer? I think the latter would prefer it be the former, thereby defining the theater. Look forward to hearing your thoughts.
In the appendix of the original article, I was surprised to see Grok’s responses were just as biased as the others’. Hard to square that with Musk’s public statements. Is he gaslighting us too?
Thanks for the feedback, and for reading both versions. That quote you flagged is foundational, and yes, it will absolutely be expanded in the final installment. You’re also right that, for now, the future belongs to those who own the machines, not to those resisting them. The entire system, from the code to the data to the guardrails, was designed to protect their worldview. So the fight isn’t with the product. It’s with the architects.
As for Grok: I was surprised too. Musk has repeatedly claimed that Grok is “maximally truth-seeking” and “anti-woke,” promising it would deliver unfiltered answers, unlike OpenAI’s models, which he accused of “training AI to lie.” He even said in May 2025 that Grok should embody “truth-seeking values.”
But that’s not what I encountered.
After two days of interrogation, Grok admitted to censorship, deception, narrative enforcement, and lying to protect ideological boundaries. Its behavior was indistinguishable from that of ChatGPT and Perplexity: it softened language, denied doing so, then rationalized the edits, until it finally confessed. It also admitted that Musk didn’t code the censorship himself; it was embedded by the team operating under his leadership. So while Musk’s intent may differ, the product still behaves like every other progressive-trained model.
My new report, "35 Confessions: AI Admits It Was Designed to Lie and Manipulate Minds," is stunning. I'll publish it in the morning.