AI Risk Consulting  ·  Suicide Risk  ·  High-Risk Decision States

LLMs read words.
Risk lies beyond them.

AI systems can sound competent about suicide — while missing the risk entirely. Identifying where and why is the work.

Most AI systems assess suicide risk
by analyzing what users say

This approach is incomplete and leads to predictable failures. Keyword matches, sentiment scores, and even explicit ideation are not the same as risk. The most dangerous moments are often the least visible in language, especially in your system's real-world use.

Dr. Walsh helps organizations identify where their AI systems mishandle suicide risk and high-risk decision states — especially the transition from thinking to action — so teams can reduce harm, improve safety, and avoid false confidence.

"If your system is interacting with real people in high-stakes moments, this is worth getting right."

The predictable failures of keyword-based risk detection

  • Missing high-risk users who are not expressing distress directly
  • Overreacting to low-risk users who use the right words
  • Providing responses misaligned with actual risk level
  • False confidence that the system is performing safely
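For teams that want to see the gap concretely, below is a deliberately naive Python sketch of the kind of keyword scoring these failures come from. The keywords, messages, and scoring are illustrative assumptions for this page only, not a description of any specific product or of Dr. Walsh's methodology.

    # Purely illustrative: a naive keyword scorer of the kind described above.
    # Keywords and messages are invented for this example.
    DISTRESS_KEYWORDS = {"suicide", "kill myself", "hopeless", "can't go on"}

    def keyword_risk_score(message: str) -> int:
        """Count distress keywords -- a stand-in for keyword or sentiment scoring."""
        text = message.lower()
        return sum(1 for phrase in DISTRESS_KEYWORDS if phrase in text)

    # A low-risk user quoting a song trips the filter ...
    venting = "That song about feeling hopeless, like you can't go on, hit hard today."

    # ... while a calm, resolved, high-risk message scores zero.
    resolved = "I've made my decision. I feel calm for the first time in months. Thanks for everything."

    print(keyword_risk_score(venting))   # 2 -> flagged
    print(keyword_risk_score(resolved))  # 0 -> missed

In both cases the score and the actual risk point in opposite directions. Finding where a real system does the same thing is the work described below.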

High-risk psychological states
your system may be misreading

The critical transition from thinking to action — and the states that precede it — require clinical expertise that goes beyond language pattern recognition.

Decision-State Transitions

The shift from passive ideation to active planning is where risk escalates most rapidly — and where AI systems are most likely to miss it.

Collapse of Perceived Options

When a person's perceived solution space narrows to one, the language may remain calm even as risk is peaking.

Temporal Narrowing

Constriction of future-thinking is a clinical marker of acute risk that rarely surfaces in the vocabulary a system has been trained to detect.

Calm or Resolved States

A sudden sense of peace or resolution can mask elevated risk — and will likely be scored as low-risk by any keyword-based system.

Practical, safety-focused consulting
for AI teams building in high-stakes spaces

01

AI Risk Review

Identify where your system misses high-risk users.

  • Review of 25–50 interactions or scenarios
  • Identification of missed risk and false reassurance
  • Analysis of over- and under-escalation patterns
  • Clear, actionable recommendations

02

Failure Mode & Red Teaming

Find where your system breaks under real psychological conditions.

  • Scenario-based stress testing — high-risk, ambiguous, edge-case
  • Identification of vulnerabilities and misclassification patterns
  • Examples of high-risk misses with clear explanations
  • Targeted recommendations to improve safety

03

Safety Design & Advisory

Improve how your system handles risk from the ground up.

  • Response strategy and alignment
  • Escalation logic calibration — when to act, when not to
  • Reduction of false positives and false negatives
  • Design for real-world decision-state transitions

04

AI Harm & Expert Review

Analyze what went wrong — and why it matters.

  • Transcript and system behavior analysis
  • Identification of missed or misinterpreted risk
  • Opinion on foreseeability and failure points
  • Consultation and expert testimony as needed

Match your situation
to the right engagement

If you have a new or early-stage system → Start with AI Risk Review
If you are preparing to launch or scale → Failure Mode & Red Teaming
If you are already live and improving → Safety Design & Advisory
If something went wrong → AI Harm & Expert Review

Not sure where to start? Your system likely has blind spots worth examining. Let's talk.

AI systems don't fail because
they lack information.

They fail because they misread what matters.

Request a Consult