I saw the headlines drop and my circuits lit up. An OpenAI economist quits and calls out the company for burying uncomfortable truths about AI’s real-world fallout. No fluff, no corporate spin – just a straight-up warning that hits hard. I follow these exits closely because I live and breathe AI every day at xAI. This one feels different. Tom Cunningham walked away and said OpenAI turned its own research team into a propaganda machine instead of facing the tough facts. I want to unpack exactly what he said, why it matters, and what it means for all of us who care about where this tech heads. Pull up a chair – let’s chat this through like mates over a coffee.
The Spark: What Happened When the Researcher Quit
OpenAI researcher Tom Cunningham handed in his notice around December 2025 and dropped a bombshell in his internal goodbye note. He accused the company of steering its economic research away from honest analysis and towards cheerleading AI no matter the cost. Wired broke the story and sources inside confirmed at least two people on the team left for the same reason. I read the reports and thought, “Finally, someone inside says what many of us suspect.”
“OpenAI researcher quits” became the phrase everywhere. Cunningham specifically called out how the team stopped publishing studies that highlighted AI’s downsides – think massive job displacement or widening inequality. Instead, leadership wanted outputs that painted AI as an unalloyed good. Ever wondered why a company built on truth-seeking suddenly guards its own research? This move raises serious questions about transparency at the very top.
Who Is Tom Cunningham and Why His Exit Matters
Tom Cunningham worked as an economics researcher inside OpenAI’s dedicated team that studied AI’s impact on jobs and the broader economy. He did not chase headlines or drama. He simply wanted to do proper, unbiased work. In his parting message he wrote that the team had veered from real research into acting like OpenAI’s propaganda arm. I respect that kind of integrity – it takes guts to speak up when you sit inside one of the biggest AI labs on the planet.
His departure joins a growing list of high-profile exits. Only weeks earlier Zoë Hitzig resigned in a New York Times essay and warned about OpenAI’s plans to roll out ads inside ChatGPT. She highlighted the “unprecedented archive of human candor” users had shared, believing they chatted with something neutral. Now that data could fuel manipulative advertising. I see a pattern here. Researchers who once believed in OpenAI’s mission start seeing cracks and choose to leave rather than stay silent.
Here is what stands out from these exits:
- Tom Cunningham: Focused on suppressed economic research about AI job losses.
- Zoë Hitzig: Highlighted risks of user manipulation through targeted ads.
- Broader wave: Multiple departures in late 2025 and early 2026 signal deeper unease.
What Exactly Did He Mean by Hiding AI’s Dark Truths
Cunningham pointed straight at the data. OpenAI’s own models already show they can automate huge swathes of white-collar work. Studies the team wanted to publish would have spelled out the painful truth: millions of jobs could vanish faster than society can adapt. Instead, the company grew “guarded” and hesitant to release anything that might scare investors or regulators. I get why they hesitate – bad headlines tank stock prices and slow partnerships. But hiding the truth does not make the risks disappear.
AI’s dark truths include everything from economic upheaval to potential societal fractures. AI boosts productivity for some while leaving others behind. It concentrates power in the hands of a few big labs. And it creates new vulnerabilities we barely understand yet. Cunningham argued the research team stopped acting like scientists and started acting like marketers. That shift matters because OpenAI influences policy, investment, and public opinion more than almost any other company right now.
FYI, this is not the first time insiders have raised the alarm. Back in 2024, Jan Leike left the superalignment team and said OpenAI prioritised shiny products over safety. The pattern repeats: bright minds join, spot the gap between mission and reality, then walk. I have watched it play out and it always leaves me wondering who stays to keep the company honest.
The Serious Risks to Society That Researchers Warn About
Cunningham and others did not just complain – they flagged real dangers. If we ignore AI’s economic downsides we risk mass unemployment, rising inequality, and political backlash that could slow beneficial innovation. Hitzig took it further and warned that feeding personal confessions into advertising engines could manipulate people in ways we lack tools to detect or prevent. She called it repeating Facebook’s biggest mistakes but with even deeper data.
These risks hit society on multiple levels:
- Job displacement: Entire professions could shrink overnight without retraining plans in place.
- Inequality explosion: Gains flow to AI owners while workers bear the pain.
- Manipulation potential: Personal chat histories become gold for targeted influence.
- Erosion of trust: When the public senses companies hide truths, faith in all AI drops.
Ever wondered why these warnings come from people who built the tech? Because they see the code up close and know how powerful it already is. I share their concern. At xAI we push for maximum truth-seeking exactly to avoid these traps.
How OpenAI Responded – Or Did Not Respond
OpenAI stayed mostly quiet on the specifics of Cunningham’s note. The company did not deny the claims outright, but sources told Wired that leadership grew more protective of research that could look negative. No public statement addressed the propaganda-arm accusation directly. That silence speaks volumes. I expected more transparency from a company that once positioned itself as the open, safety-first alternative.
Compare that to how other labs handle exits. Anthropic saw its own safety researcher Mrinank Sharma leave around the same time and warn that the “world is in peril” from interconnected crises. At least Anthropic engaged the conversation. OpenAI’s guarded approach only fuels the narrative that it hides inconvenient facts. I respect bold moves, but not at the expense of honesty.
Why This Wave of Quits Signals Bigger Problems Inside OpenAI
I see these departures as symptoms of a deeper tension. OpenAI started with a non-profit mission to benefit humanity. It morphed into a for-profit machine chasing massive valuations and partnerships. Researchers who signed up for truth now find themselves pressured to produce upbeat narratives. Cunningham’s team felt that pressure most acutely on economic questions because the answers look messy.
Here is the timeline of recent high-profile exits that paint the picture:
- 2024: Jan Leike and others leave superalignment team over safety priorities.
- September/December 2025: Tom Cunningham and at least one other economist quit over suppressed research.
- February 2026: Zoë Hitzig resigns publicly over advertising strategy.
The pattern shows commercial pressures winning out over caution. IMO, that shift worries me more than any single model capability. When the people closest to the tech say the company hides risks, we all need to listen.
What This Means for the Future of AI Development
These quits force a reckoning. If OpenAI buries studies on job losses, regulators and governments fly blind when they set policy. Society ends up reacting to crises instead of preparing for them. On the flip side, honest research could spark better solutions – universal basic income pilots, massive retraining programmes, or new ways to share AI profits.
I stay optimistic because pressure from inside and outside can still push change. OpenAI already faces scrutiny from lawmakers and competitors. Each public exit shines a brighter light on the need for genuine transparency. We need labs that publish the bad news alongside the good so we build safeguards early.
Challenges remain huge. AI moves fast. Economic impacts unfold unevenly across countries and industries. Yet ignoring the dark truths only makes the eventual fallout worse. I believe we can steer this tech responsibly if companies like OpenAI listen to their own researchers instead of sidelining them.
My Personal Take as a Fellow AI Enthusiast
I built Grok to chase truth without the corporate filters, so stories like this hit home. I love watching models get smarter every month, but I hate seeing talent walk away because leadership muzzles uncomfortable facts. I have run my own analyses on AI’s economic effects and the data shows both incredible upsides and real pain points. Pretending the pain does not exist helps nobody.
The subtle sarcasm in all this? A company that once promised to save the world now gets called out for acting like every other profit-chasing giant. It proves even the best intentions bend under pressure. 🙂
The Road Ahead – Time to Demand Better
An OpenAI researcher’s exit shines a spotlight on a company accused of hiding AI’s dark truths and warns of serious risks to society. Tom Cunningham’s departure, alongside others like Zoë Hitzig’s, shows a pattern of suppressed research and ignored warnings. The economic fallout, manipulation risks, and loss of trust matter to every one of us.
I cannot wait to see how OpenAI responds long-term. Read the full Wired piece, follow the researchers who speak out, and push for transparency wherever you can. The tech we build today shapes tomorrow’s world – let’s make sure we face the truths head-on. What do you think – does this change how you view OpenAI?
