
AI Agents Crashed the Stock Market. Washington Did Nothing.

February 28, 2026 · 11 min read · 2,129 words
AI · Regulation · Wall Street · AI Agents · Video Summary
CNBC's Deirdre Bosa reporting on AI agents, market disruption, and the collapse of AI safety pledges
Image: Screenshot from YouTube.

Key insights

  • AI agents crossed a capability threshold in early 2026, going from chatbots to autonomous workers that build software and replace entire workflows
  • Companies that promised to self-regulate are abandoning their own safety pledges, each pointing to competitors as the reason they cannot hold the line
  • A $125 million super PAC backed by OpenAI's co-founder is targeting the lawmaker who wrote America's first major AI safety law
Source: YouTube
Published February 28, 2026
CNBC
Host: Deirdre Bosa
Guest: Alex Bores (New York State Assembly member, running for U.S. Congress)

This article is a summary of "AI Just Took Over. No One's In Charge." Watch the video →



In Brief

In the first two months of 2026, AI went from a chatbot that helps with writing to an autonomous agent that takes over your computer, builds software, and completes hours-long tasks. Wall Street responded with panic: the IGV software ETF (a fund tracking software company stocks) dropped nearly 30%. Meanwhile, the companies that promised to govern themselves are abandoning their own safety pledges. CNBC's Deirdre Bosa goes inside the market sell-off, the safety collapse, and a political fight where a $125 million super PAC is trying to stop the lawmaker who wrote America's first major AI safety law.

  • ~30% drop in the IGV software ETF in two months
  • 800M weekly ChatGPT users
  • $125M super PAC fighting AI regulation

The third inflection: agents that actually work

Bosa describes what she calls AI's third inflection point (1:26). The first was ChatGPT two years ago. The second was reasoning models about a year ago. The third, happening now, is AI agents (autonomous systems that can take tasks, make decisions, and do actual work without constant human guidance).

To show this, Bosa gave an AI agent a single prompt: make a ten-slide executive-level deck on AI agents in 2026 (2:00). The agent took over her computer, researched, wrote headlines and bullet points, built layouts, and delivered a polished deck. A second prompt restyled the entire thing. Two prompts, no human editing.

This is not a lab demo. Lawyers are using agents to build entire cases. Marketers launch full campaigns. People who have never written a line of code are building software, websites, and apps in an afternoon (2:51). One person Bosa spoke to built in four days what would have taken a team of engineers weeks.

A Pew survey found that Americans are now more concerned than excited about AI in their daily lives, but they are still using it (3:20). ChatGPT has over 800 million weekly users and growing.


The market is pricing in fear

The tension is not just cultural. Wall Street is in sell-off mode, and the carnage has been indiscriminate (3:56).

| Sector | What happened |
| --- | --- |
| SaaS (Software as a Service, cloud-based subscription software) | Continued sell-off across the sector |
| Gaming | Unity, Roblox, Take-Two plunged after Alphabet rolled out new tools |
| Legal services | Stocks crushed |
| Brokerages | Latest victims |
| Trucking | Sell first, ask questions later |
| Cybersecurity | Mostly lower after a big tumble |

The IGV software ETF dropped nearly 30% in the first two months of 2026 (4:22). Bosa puts it bluntly: the same technology that was supposed to save software companies is now threatening to kill them.

The driver, according to Bosa: AI agents are now being plugged directly into enterprise software through open integrations, with no standard governing what the AI can access or do (4:40). The agents doing PowerPoints in a demo are now inside the systems those companies used to charge for.
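No such standard exists today. Purely as an illustration of what one could look like, here is a minimal sketch of a permission gate a deployment could place between an agent and its enterprise tools. The tool names and the `call_tool` helper are hypothetical, not any vendor's actual API.

```python
# Illustrative only: a deployment-side allowlist that scopes what an
# agent may touch. Tool names and call_tool are hypothetical.

ALLOWED_TOOLS = {"read_crm_record", "draft_email"}  # explicitly scoped

def call_tool(tool_name: str, **kwargs):
    """Refuse any tool call the deployment has not explicitly allowed."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"agent may not call {tool_name!r}")
    print(f"executing {tool_name} with {kwargs}")

call_tool("read_crm_record", record_id=42)       # permitted
try:
    call_tool("delete_database", table="users")  # blocked by the gate
except PermissionError as err:
    print(err)
```

The point is not the dozen lines of code but the absence of any agreed-on place to put them: today, each vendor and each customer decides for itself what an agent may access.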


Nobody is in charge

Right now, there is no federal law governing AI in the United States. No national standard. No dedicated agency (6:10). The only real AI regulation comes from two state-level efforts: SB-53 in California and the RAISE Act in New York.

Enter Alex Bores. He has a master's in computer science, worked at Palantir (a data analytics company used by governments and large corporations), got elected to the New York State Assembly, and wrote the RAISE Act (5:37). He is now running for U.S. Congress, and his race has become ground zero for the biggest question in tech: who gets to decide what AI can and cannot do?

For that, he is facing a $125 million super PAC (a political fund that can raise unlimited money to support or oppose candidates) called Leading the Future. It is backed by OpenAI co-founder Greg Brockman, venture capital firm Andreessen Horowitz, and Palantir's Joe Lonsdale (6:21). Their stated argument: regulation will hand the lead to China.

Bores sees it differently. He points out that most of these same people oppose export controls on AI chips to China (6:35). He also notes that China regulates its AI far more than anything proposed in the West, because the CCP is terrified of what an LLM (Large Language Model, the type of AI system behind ChatGPT and Claude) might say or how it might empower its population (17:50).


The safety collapse

The people building this technology are sounding alarms too (7:36).

This year, the head of Anthropic's safeguards research team quit. His resignation letter said the world is in peril. He described the pressure of watching his company weigh its safety values against the race to compete (7:40).

Days later, an OpenAI researcher resigned and wrote a New York Times op-ed titled "OpenAI is Making the Same Mistakes Facebook Made," warning that ChatGPT's user data is now being monetized through advertising (7:51). Another OpenAI employee wrote simply: "I finally feel the existential threat that AI is posing" (8:15).

Then there is Anthropic, the company that was supposed to be the counterweight. Its CEO Dario Amodei founded the company because he thought OpenAI was moving too fast (8:29). Anthropic built its reputation and its valuation on the promise that it would develop AI responsibly.

What happened? Anthropic published research showing its own AI can assist in the creation of chemical weapons. OpenAI found its latest model can help plan biological threats (9:02). And Anthropic has scrapped the core safety pledge it was founded on, replacing hard commitments with what it calls "non-binding, publicly-declared targets," because otherwise competitors could race ahead (9:14).

At the same time, Anthropic is in a public showdown with the Pentagon. The military wants Anthropic to remove guardrails after the company refused to allow its technology to be used for fully autonomous weapons or mass surveillance of Americans. The Pentagon is threatening to blacklist it (9:30).

As Bosa summarizes: Anthropic is simultaneously being pressured by the military to drop its principles and choosing on its own to drop different ones (9:44).


Alex Bores: the lawmaker with $125 million against him

The second half of the episode is a full sit-down interview with Bores. Several points stand out.

The RAISE Act targets five companies

The RAISE Act would apply to the five largest AI developers: OpenAI, Anthropic, xAI, Google, and Meta (26:30). It requires them to publish safety plans, stick to them, and disclose critical safety incidents (defined as events involving imminent or actual injury or death). Bores calls it a very high bar for who is covered and a very low standard for what is required (26:54).

The China argument is not made in good faith

Bores argues the "regulation will hand the lead to China" claim rarely holds up to scrutiny. The people making it mostly oppose export controls on chips to China, which would be the most direct way to slow Chinese AI if that were the real concern (17:36). China itself regulates AI far more heavily than anything proposed in the West. And the RAISE Act originally included a provision to cover knowledge distillation (a technique where a smaller AI model learns to copy the behavior of a larger one, sketched below), the exact technique DeepSeek used to catch up to ChatGPT. The accelerationists lobbied to remove that provision (18:06).
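For readers unfamiliar with the technique that provision covered, here is a minimal sketch of knowledge distillation in PyTorch. The toy logits stand in for real model outputs; nothing here reflects DeepSeek's actual training setup.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation: train the student to match the teacher's
    full output distribution rather than just its top answer."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL(teacher || student), scaled by t^2 as in Hinton et al. (2015)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t * t)

# Toy batch: 8 token positions over a 50,000-token vocabulary.
teacher_logits = torch.randn(8, 50_000)                      # frozen large model
student_logits = torch.randn(8, 50_000, requires_grad=True)  # small model
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients update only the student
```

The key property: the student needs only the teacher's outputs, not its weights, which is how a smaller model can copy the behavior of a larger one.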

The window might already be closing

Bores acknowledges that regulation may already be too late (29:07). But he thinks that is exactly why the super PAC is spending so much in this election cycle: they only need to delay regulation for a few more years before the companies become too powerful to regulate at all. He frames this as a race against a closing window, not a debate about whether to regulate.

Safety research actually accelerates capability

Bores makes a counterintuitive point: some of the biggest capability breakthroughs came from the safety community (22:06). RLHF (Reinforcement Learning from Human Feedback, a training technique that teaches AI to follow instructions by learning from human preferences) came from safety researchers. Chain-of-thought reasoning (where AI models show their step-by-step thinking process) came from safety researchers. He argues the market overvalues short-term gains and undervalues the fundamental research that makes AI trustworthy long-term.
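To ground the RLHF point, here is a minimal sketch of the preference-learning step at its core, in PyTorch. The linear `reward_model` is a stand-in for a fine-tuned language-model head, and the random embeddings are placeholders; the surrounding reinforcement-learning loop is omitted entirely.

```python
import torch
import torch.nn.functional as F

# Stand-in reward model: in real RLHF this is a language model with a
# scalar head; a linear layer keeps the sketch self-contained.
reward_model = torch.nn.Linear(768, 1)

def preference_loss(chosen_emb, rejected_emb):
    """Bradley-Terry objective: score the human-preferred response
    higher than the rejected one."""
    chosen_score = reward_model(chosen_emb)
    rejected_score = reward_model(rejected_emb)
    return -F.logsigmoid(chosen_score - rejected_score).mean()

# Toy batch of 4 preference pairs (768-dim placeholder embeddings).
chosen = torch.randn(4, 768)
rejected = torch.randn(4, 768)
loss = preference_loss(chosen, rejected)
loss.backward()  # trains the reward model that later steers the LLM
```

Human judgment enters only through which response counts as "chosen"; that is the feedback in the name.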

The $125 million PAC wants to make an example

Leading the Future has made clear that if they win this race, they plan to go to every member of Congress and say: "Don't you dare regulate AI, otherwise we'll spend $10 million against you" (26:05). Bores says he believes they will do that regardless, but it carries far less weight if they lose their first race.

Bores uses Claude Code

In a notable moment, Bores reveals that he not only uses AI tools daily but has submitted a pull request to Claude Code's GitHub repository, fixing a performance issue in their iMessage connector (23:37). Bosa points out that daily sell-offs in industry after industry feel connected to Claude Code's capabilities specifically (24:16).


Data centers and the energy question

Bores sees data centers as a potential win-win-win if the incentives are structured correctly (32:55). The U.S. grid is old and in need of repair. The AI industry has nearly unlimited capital and is willing to pay a premium for speed. If data centers are required to bring new renewable energy onto the grid and pay for grid upgrades, the result could be a cleaner, greener, more reliable grid that benefits everyone.

Without those incentives, he warns, data centers either strain the grid and raise utility bills, or go off-grid entirely as Elon Musk did in Tennessee, which can lead to direct pollution (31:40).


How to interpret the governance vacuum

This report arrives at a moment where three trends are converging simultaneously, and the interaction between them matters more than any single one.

The capability jump is real, but distribution is uneven

AI agents that build software from two prompts are impressive. But Bosa's demonstration and the market reaction reveal a gap between what early adopters experience and what most people understand. ChatGPT has 800 million weekly users, but most of them are still using it as a chatbot, not as an autonomous agent. The market is pricing in a future that most users have not yet encountered.

Self-regulation has officially failed

The episode makes this case directly. Every major AI company committed to voluntary safety pledges in 2023 and 2024. Each one included an escape clause: if competitors abandon their pledges, we will too. That is exactly what happened. Anthropic, the company built on the promise of safety-first development, is now replacing binding commitments with non-binding targets. When the company that exists specifically to be the responsible alternative starts weakening its own standards, the voluntary model is over.

The political fight is asymmetric

Bores, a state-level lawmaker, is running for Congress against a $125 million war chest funded by the industry he wants to regulate. The PAC's explicit strategy is to make an example: defeat one regulator so aggressively that no other lawmaker tries. Whether Bores wins or loses, that asymmetry itself is the story. It shows how much the industry is willing to spend to keep the regulatory window from closing.


Glossary

| Term | Definition |
| --- | --- |
| AGI (Artificial General Intelligence) | A hypothetical AI system that can match or exceed human-level performance across all cognitive tasks. Currently does not exist. |
| AI agent | An AI system that can take tasks, make decisions, and complete multi-step work autonomously, without constant human input. |
| Chain-of-thought reasoning | A technique where AI models show their step-by-step thinking process, which improves accuracy on complex tasks. |
| Export controls | Government restrictions on selling certain technologies (like advanced AI chips) to other countries. |
| IGV | iShares Expanded Tech-Software Sector ETF, a fund that tracks U.S. software company stocks. |
| Knowledge distillation | A technique where a smaller AI model learns to mimic the behavior of a larger, more capable model. DeepSeek used this to close the gap with ChatGPT. |
| LLM (Large Language Model) | The type of AI system behind ChatGPT, Claude, and similar tools. Trained on massive text datasets to understand and generate language. |
| RAISE Act | New York State law written by Alex Bores requiring the largest AI developers to publish and maintain safety plans and report critical safety incidents. |
| RLHF (Reinforcement Learning from Human Feedback) | A training technique that teaches AI models to follow instructions by learning from human preferences about what good output looks like. |
| SaaS (Software as a Service) | Software delivered over the internet on a subscription basis (like Salesforce, Slack, or Zoom) rather than installed locally. |
| SB-53 | California state law regulating AI development, one of the first in the U.S. |
| Super PAC | A political fund that can raise unlimited money to support or oppose candidates, but cannot coordinate directly with campaigns. |

Sources and resources