
highplainsdem

(62,671 posts)
Sun Apr 19, 2026, 01:57 PM

The Industry of the Future Is Run By People Who Hate Each Other (New York magazine, 4/19)

https://nymag.com/intelligencer/article/why-all-the-ai-leaders-hate-one-another.html

One thing you hear about a lot from the tiny group of extraordinarily wealthy and powerful people in charge of America’s AI companies is that, as the world sits on the cusp of potentially massive economic, social, and perhaps even spiritual transformation, it is time to figure this out together. “I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species,” wrote Anthropic’s Dario Amodei earlier this year, suggesting that one way to make it through will be to “encourage coordination” at the level of “industry and society.” AI will be “the most beneficial technology ever created,” Google’s Demis Hassabis has said, “but only if we apply it in the right way and build it in the right way.” (Just as you can tell you’re reading AI-generated text from all the bullet points, or an insistence on describing everything as not x, but y, a telltale sign that you’re hearing from an AI executive is a pleading, tic-like overuse of collective pronouns.) “We (the whole industry, not just OpenAI) are building a brain for the world,” OpenAI’s Sam Altman explained in a post about the coming “gentle singularity,” which is why it’s important that “we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want.”

A lot of what we’re hearing about us is really about them, of course, and is intended to signal, in the context of growing AI backlash but also varying degrees of genuine personal angst and uncertainty, that they can be trusted to shepherd a technology that, if built and deployed the wrong way, they say could tear apart society, summon authoritarianism, or worse. It’s an awkward message. The public, according to numerous recent polls, finds it less appealing the more it hears it. You can blame AI’s image problem on a lot of things: vague pressure to use it at work; suddenly abundant AI slop and spam; individually off-putting and polarizing founders; ideological objections to how it’s trained and deployed; foreboding, energy-hungry data centers that communities across the country are turning against. Mostly, of course, it’s fear about jobs.

But there’s one factor undermining the messaging from Altman, Amodei, Hassabis, and others that is both underrated and, perhaps, a blind spot for the industry: A lot of these guys absolutely and obviously despise one another.

The AI industry is defined by research, technological breakthroughs, and billions of dollars of eager capital, sure, but also by petty resentments, estrangements, and raging blood feuds, many of which have been building for years. “Been thinking a lot about whether it’s possible to stop humanity from developing AI,” wrote Sam Altman to Elon Musk in 2015, shortly after Google had acquired DeepMind. Given that it seemed like it would happen anyway, he wrote, “it seems like it would be good for someone other than Google to do it first.” Musk, who had told Altman that DeepMind was causing him “extreme mental stress” and that, should Google “win,” it would be “really bad news with their one mind to rule the world philosophy,” was receptive after recently failing to lure Hassabis to his constellation of companies instead. Soon, they became co-founders of OpenAI. By 2018, a bitter power struggle led to Musk cutting ties with OpenAI, leading to years of court battles, some still ongoing. Now, the men tweet openly about how much contempt they have for one another. (Altman on Musk: “I don’t think he’s, like, a happy person. I do feel for him.” Musk on Altman: “Scam Altman lies as easily as he breathes.”) Anthropic’s founding was the result of a core group of researchers and employees leaving OpenAI over concerns about its approach to safety, but also about Altman’s character specifically. (Amodei on Altman in 2021: “The problem with OpenAI is Sam himself.” In 2026, after OpenAI seized on Anthropic’s conflict with the Pentagon: Altman is telling “straight up lies” and “gaslighting.”) In 2023, Musk, now in possession of Twitter and a clearer public political identity, finally founded his own firm, xAI, to build a “maximum truth-seeking AI that tries to understand the nature of the universe,” but also because Sam Altman was making ChatGPT “woke,” which he said could be “deadly.” (Elaborating on the theme, and making sure not to miss anyone, Musk posted at Amodei earlier this year: “Your AI hates Whites & Asians, especially Chinese, heterosexuals and men. This is misanthropic and evil.”)

-snip-