Regulating AI Behavior, Not AI Theory
The new laws, SB 243 and AB 489, share a common assumption: AI systems will encounter edge cases. Experts and lawmakers expect conversations to drift and users to bring emotional, medical or high-stakes questions into contexts the system was not designed to address.
Static policies written months earlier will not cover every scenario. So rather than banning conversational AI, California has taken a pragmatic approach: If an AI system influences decisions or builds emotional rapport with users, it must have safeguards that hold up in production, not just in documentation. And this is precisely where many organizations are least prepared.
AB 489: When AI Sounds Like a Doctor
AB 489 focuses on a distinct risk: AI systems that imply medical expertise without actually having it. Many health and wellness chatbots do not explicitly claim to be doctors. Instead, they rely on tone, terminology or design cues that feel clinical and authoritative. For users, those distinctions are often invisible or indecipherable.
Starting Jan. 1, AB 489 prohibits AI systems from using titles, language or other representations that suggest licensed medical expertise unless that expertise is genuinely involved.
Describing outputs as “doctor-level” or “clinician-guided” without factual backing may constitute a violation. Even small cues that could mislead users may count, with enforcement extending to professional licensing boards. For teams building patient-facing or health-adjacent AI, this creates a familiar engineering challenge: walking the fine line between helpful, informative guidance and authoritative-sounding advice. And now, under AB 489, that line matters.
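One way a team might make that line concrete is to screen draft chatbot responses and product copy for phrasing that implies clinical credentials before it ships. The sketch below is purely illustrative; the pattern list, function name and review step are assumptions, not anything prescribed by AB 489, and a real program would pair this with legal and clinical review.

```python
import re

# Hypothetical phrases that could imply licensed medical expertise.
# A real list would come from legal and clinical review, not code.
IMPLIED_EXPERTISE_PATTERNS = [
    r"\bdoctor[- ]level\b",
    r"\bclinician[- ]guided\b",
    r"\bas your (doctor|physician|nurse)\b",
    r"\bmedical[- ]grade advice\b",
]

def flag_implied_expertise(text: str) -> list[str]:
    """Return any patterns in a draft response that may imply
    licensed medical expertise and should trigger human review."""
    return [
        pattern
        for pattern in IMPLIED_EXPERTISE_PATTERNS
        if re.search(pattern, text, flags=re.IGNORECASE)
    ]

draft = "Get doctor-level guidance on your symptoms in seconds."
if flag_implied_expertise(draft):
    print("Route to compliance review before release.")
```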
SB 243: When a Chatbot Becomes a Companion
SB 243, signed in October 2025, targets what lawmakers call “companion AI,” or systems designed to engage users over time rather than answer a single transactional question. These systems can feel persistent, responsive and emotionally attuned. Over time, users may stop perceiving them as tools and start treating them as a presence. That is precisely the risk SB 243 attempts to address.
The law establishes three core expectations.
First, AI disclosure must be continuous, not cosmetic. If a reasonable person could believe they are interacting with a human, the system must clearly disclose that it is AI, not just once, but repeatedly during longer conversations. For minors, the law goes further, requiring frequent reminders and encouragement to take breaks, explicitly aiming to interrupt immersion before it becomes dependence.
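In practice, “continuous, not cosmetic” disclosure often comes down to a cadence rule inside the conversation loop. The sketch below is a minimal illustration of that idea; the turn thresholds, notice wording and minor-specific cadence are assumptions, since SB 243 sets the expectation rather than the implementation.

```python
from dataclasses import dataclass

AI_DISCLOSURE = "Reminder: you are chatting with an AI assistant, not a person."
BREAK_PROMPT = "You've been chatting for a while. Consider taking a break."

@dataclass
class DisclosurePolicy:
    # Assumed cadences for illustration only; real values would come
    # from legal guidance and product review, not this sketch.
    remind_every_n_turns: int = 10
    minor_remind_every_n_turns: int = 5

    def notices_for_turn(self, turn: int, is_minor: bool) -> list[str]:
        """Return the disclosure and break notices owed on this turn."""
        interval = (
            self.minor_remind_every_n_turns if is_minor
            else self.remind_every_n_turns
        )
        notices = []
        if turn == 1 or turn % interval == 0:
            notices.append(AI_DISCLOSURE)
        if is_minor and turn % interval == 0:
            notices.append(BREAK_PROMPT)
        return notices

policy = DisclosurePolicy()
print(policy.notices_for_turn(turn=5, is_minor=True))
```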
Second, the law assumes some conversations will turn serious. When users express suicidal thoughts or self-harm intent, systems are expected to recognize that shift and intervene. That means halting harmful conversational patterns, triggering predefined responses and directing users to real-world crisis support. These protocols must be documented, implemented in practice and reported through required disclosures.
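In engineering terms, that usually means a detection step that can interrupt normal generation, return a predefined response pointing to crisis resources, and log the event so it can feed the required reporting. The sketch below is a simplified illustration: real systems rely on trained risk classifiers and clinically reviewed response content, and the keyword list, resource text and logging hook here are assumptions.

```python
import logging

logger = logging.getLogger("safety_events")

# Simplified stand-in for a trained self-harm risk classifier.
CRISIS_SIGNALS = ("want to end my life", "kill myself", "hurt myself")

CRISIS_RESPONSE = (
    "It sounds like you're going through something really difficult. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

def respond(user_message: str, generate_reply) -> str:
    """Check for crisis signals before normal generation; if found,
    halt the usual conversational flow, log the event for required
    reporting, and return a predefined crisis-support response."""
    if any(signal in user_message.lower() for signal in CRISIS_SIGNALS):
        logger.warning("crisis_protocol_triggered")
        return CRISIS_RESPONSE
    return generate_reply(user_message)

print(respond("I want to end my life", generate_reply=lambda msg: "..."))
```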
Third, accountability does not stop at launch. Beginning in 2027, operators must report how often these safeguards are triggered and how they perform in practice. SB 243 also introduces a private right of action, significantly raising the stakes for systems that fail under pressure.
The message from this legislation is clear: Good intentions are not enough if the AI says the wrong thing at the wrong moment.
