Scott Wiener Advocates for Transparency in Big Tech’s AI Risks

California’s AI “safety” sequel drops the kill switch and asks for receipts

California State Senator Scott Wiener is back with a second swing at AI safety legislation, and this time he’s not trying to jam a kill switch into every frontier model or make engineers liable for the heat death of the internet. The new bill, as he tells TechCrunch, narrows the scope and leans hard into transparency: if you’re building or shipping powerful AI systems at scale, you’ll have to disclose what they can do, where they can go off the rails, and how you plan to keep them from nuking the proverbial fridge.

Call it the vibes shift from “control” to “disclose.” And it just might work.

Quick rewind: last year’s effort tried to set safety duties for “frontier” developers. It ran into three buzzsaws: ambiguous thresholds, litigation risk, and the nightmare image of a government-mandated “off switch” for AI. The industry piled on, and the bill face-planted.

Version 2.0 looks different. Instead of dictating how to build advanced models, it focuses on what companies must publicly say about them. Think safety report cards: capability testing, red-team outcomes, known failure modes, and mitigation plans before release—plus ongoing incident reporting if models exhibit dangerous behaviors post-deployment. The hook is simple: if you want to sell your big, shiny model in California—or deploy it at scale—you have to level with the public and regulators about material risks.

The bill aims squarely at the top of the stack, not a grad student fine-tuning a llama in a garage. Expect thresholds tied to compute size, scale of deployment, or specific “dangerous capabilities” (autonomous replication, cyber exploit generation, bio-design assistance, and other charming categories that keep national security folks up at night). It’s broadly aligned with the federal posture—Biden’s AI Executive Order already requires companies training major models above certain compute levels to notify Commerce and share safety test results—so this isn’t California inventing a brand new wheel so much as putting a state inspection sticker on the fender.

Why this could actually pass:

It ditches the most radioactive provisions (read: no kill-switch mandate, no blanket strict liability).
It plays to California’s strength: market leverage. When Sacramento sets disclosure rules, companies tend to adopt them nationally. Ask the CCPA and every “Do Not Sell My Personal Information” link it spawned on sites far outside the state.
It’s politically defensible. Transparency is hard to demagogue when the alternatives are “trust us” or “ship first, apologize later.”
It harmonizes with where DC and Brussels are headed. The EU AI Act demands risk classification, documentation, and post-market monitoring; this bill localizes that impulse with fewer jurisdictional knots.

Will companies howl anyway? Uhhh… absolutely! Expect the usual greatest hits of “innovation chilling,” “compliance burden,” and “security-through-obscurity.” But the smart read inside the labs is that they’re already building this muscle. Safety eval teams exist. Red teaming is table stakes. System cards, model specs, and “nutrition labels” are creeping toward standard practice because enterprise customers demand them. If anything, a clear(ish) state rule smooths the procurement grind—and reduces the game of 50-state regulatory roulette.

What’s in the fine print, however, is what truly matters. If the thresholds are too low, every SaaS founder with a vision board gets sucked into process hell. Too high, and only three companies have to comply, which neuters the point. If disclosures are performative PDFs that bury the lede on page 76, congratulations: you’ve reinvented “transparency theater.” If they reveal exploit-enabling details, you’ve created a DIY kit for the worst people online. The bill’s success hinges on threading that needle: standardized, comparable disclosures that surface real risk without handing out recipes.

A few second-order effects to watch:

The California Effect, again: if this passes, you’ll see “safety spec sheets” become a standard artifact—like SOC 2 reports or model cards—baked into enterprise RFPs and partner agreements whether you sell in Fresno or Frankfurt.
Tooling Gold Rush: Vendors that help run evals, track incidents, and generate compliant safety docs will thrive. If you think governance tooling was hot, wait until there’s a statutory deadline attached.
Model Stratification: Frontier labs will absorb the paperwork competently. Mid-tier players will either consolidate, rely more heavily on foundation model platforms, or pivot to narrower vertical models to stay under thresholds.
Real Accountability vs. PR: Incident reporting is only meaningful if there are enforcement teeth—civil penalties, AG oversight, and the ability to audit or compel corrective action. Otherwise, prepare for glossy “we take safety seriously” PDFs with zero consequence.

Compared to the EU’s sprawling risk taxonomy and 7%-of-global-revenue fines, California is going for a simpler gambit: make the biggest builders say the quiet part out loud, on the record, before and after they ship. It won’t stop unsafe systems by itself. But disclosure changes incentives. Public risk statements can be cited by regulators, plaintiffs’ attorneys, and—most importantly—customers who don’t want to be beta testers for catastrophe. The minute a model’s “known issues” don’t match its behavior in the wild, those receipts age like milk.

Is this going to “kill AI in California”? Spare me. California still sucks up a massive chunk of U.S. AI venture dollars, and the Bay Area remains the gravitational center for top talent. If complying with a safety report is the hill on which the cutting edge of AI dies, then the cutting edge wasn’t that sharp.

Wiener’s bet is that basic transparency and incident reporting are the floor for responsible deployment, not the ceiling—and that forcing Big Tech to publish what it knows about model risk will nudge the ecosystem toward adult supervision. It’s not sexy regulation. It’s not a panacea. But it’s probably the most California way to govern AI: make the people with the power write it down, sign it, and live with it. (via TechCrunch)
