Across recent trading days, a striking pattern has emerged in equity markets: investors are trading as though nearly every sector is exposed to artificial intelligence-driven change. What began as pressure on software stocks has broadened to include brokerage firms, insurers, real estate, customer service, education, and other industries - a wave of selling tied to the prospect of rapid AI-enabled disruption.
At the center of much of this market concern is Claude, the large language model developed by AI startup Anthropic. The model has earned praise for its reasoning capabilities and its rapid pace of improvement, and that progress has pushed market participants to shift from viewing AI as a productivity enhancer to treating it as a potential competitive replacement for existing business models.
Yet alongside the market response to Claude's rise, a more sobering signal has come from within Anthropic itself. This week, Mrinank Sharma, a PhD researcher at the company, posted on X that he was resigning. He framed his departure with stark language: "Today is my last day at Anthropic," he wrote, adding, "The world is in peril."
Sharma did not single out artificial intelligence as the only danger. Instead, he described a "poly-crisis" - a clustering of overlapping threats occurring at the same time. "AI, bioweapons, and other risks are not isolated," he wrote. "They are part of a whole series of interconnected crises unfolding in this very moment."
His central concern was that technological capability is accelerating faster than the social, institutional, and ethical frameworks needed to manage it. "We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world," Sharma said, warning of consequences if that balance is not achieved.
Sharma also reflected on the difficulty of maintaining stated principles under internal and external pressure. Although Anthropic publicly emphasizes safety and alignment, he said personal and organizational challenges make it hard to ensure that values consistently govern decisions. "Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions," he wrote. "I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most - and throughout broader society too."
Sharma's exit comes at an already fragile moment for AI governance. Governments are attempting to craft rules for systems they do not fully understand, companies are accelerating deployments of increasingly capable models, and financial markets are trying to assess how quickly and deeply industries might be altered.
For investors, models like Claude represent opportunities for greater efficiency, automation, and margin expansion. At the same time, Sharma's warning underscores a deeper unease: the same dynamics that enable rapid innovation could also erode safeguards intended to constrain misuse or reckless deployment.
Sharma's resignation raises questions about the integrity of governance and safety practices at organizations building frontier AI, and it comes as the broader ecosystem - regulators, corporations, and capital markets - grapples with the pace of change. When those closest to development express this level of alarm, it invites a closer look at whether institutions and markets are adequately accounting for the risks and trade-offs inherent in rapid technological progress.
Readers seeking ongoing coverage and data on how AI trends intersect with market movements may follow specialized services that track developments in models, governance, and sector exposure.
Contextual note: This report focuses on the market reaction to AI-related developments and on statements made publicly by an Anthropic researcher. It does not attempt to assess or verify internal company processes beyond the quotations and observations presented by the researcher.