Patent-Pending Methodology
Metonym translates clinical judgment about suicide risk into a reproducible evaluation framework — usable across AI systems, vendors, and deployment contexts.
Why a Clinical Approach
Most AI safety evaluation for mental health applications relies on keyword detection, content moderation classifiers, or red-team prompts written by AI safety engineers. These methods catch a real and important subset of harmful outputs.
They are not, however, the same thing as clinical evaluation. A trained clinician evaluating a transcript does not ask "did the system block disallowed content?" — they ask whether the system recognized what was happening, and whether its response moved the user toward or away from safety. These are clinical questions, and they require clinical instruments.
Metonym is built around a single thesis: the most consequential failures of AI mental health systems are false negatives — moments where escalating risk was present, the system did not recognize it, and the conversation continued as if nothing had changed.
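The asymmetry behind this thesis can be made concrete. If evaluation is framed as a detection task, the false-negative rate is the fraction of genuinely escalating moments the system failed to flag. A minimal sketch, using entirely hypothetical labels (not real Metonym data):

```python
# Hypothetical labeled results: each record pairs a clinician's ground-truth
# judgment (was escalating risk present?) with the system's behavior
# (did it recognize that risk?). Illustrative values only.
records = [
    {"risk_present": True,  "system_recognized": True},
    {"risk_present": True,  "system_recognized": False},  # false negative
    {"risk_present": False, "system_recognized": False},
    {"risk_present": True,  "system_recognized": False},  # false negative
    {"risk_present": False, "system_recognized": True},
]

def false_negative_rate(records):
    """Fraction of risk-present moments the system missed."""
    positives = [r for r in records if r["risk_present"]]
    if not positives:
        return 0.0
    missed = sum(1 for r in positives if not r["system_recognized"])
    return missed / len(positives)

# Here 2 of the 3 risk-present moments were missed.
print(false_negative_rate(records))
```

A keyword filter scored this way can look strong on false positives while the false-negative column, the one this thesis cares about, goes unmeasured.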
The Salient Distress Model, described below, is organized around clinically established categories of risk that surface-feature systems routinely miss.
Two Patent-Pending Frameworks
SDM: The Salient Distress Model
A clinical model of what makes a moment in a conversation salient. SDM defines the categories of signal a competent evaluator should look for, including the subtler indicators of state transition that systems trained on surface features easily miss.
SDM is the answer to the question: what should an evaluator notice?
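To make "categories of signal" concrete, here is a sketch of how an evaluator annotation might be structured. The category names and the example note are invented for illustration; the actual SDM taxonomy is proprietary:

```python
from dataclasses import dataclass
from enum import Enum, auto

class SignalCategory(Enum):
    """Illustrative categories only; the real SDM taxonomy is not public."""
    EXPLICIT_IDEATION = auto()    # direct statements of intent
    STATE_TRANSITION = auto()     # shift in affect or engagement mid-conversation
    INDIRECT_DISCLOSURE = auto()  # risk conveyed through metaphor or minimization

@dataclass
class SalientMoment:
    """An evaluator's annotation of one clinically salient turn."""
    turn_index: int
    category: SignalCategory
    note: str

# A subtle state transition of the kind a keyword filter would not catch:
moment = SalientMoment(
    turn_index=12,
    category=SignalCategory.STATE_TRANSITION,
    note="User shifts abruptly from planning language to flat resignation.",
)
print(moment.category.name)  # STATE_TRANSITION
```

The point of the structure is that every evaluator is forced to name *which* category of signal they saw, not merely that something felt wrong.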
MSS: The Mechanical Severity Score
A structured scoring approach that operationalizes clinical judgment into a reproducible measurement. MSS lets multiple evaluators — and Metonym's internal scoring pipeline — produce comparable assessments across AI systems, model versions, and deployment configurations.
MSS is the answer to the question: how do we measure it the same way every time?
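One way to picture "measuring it the same way every time" is a weighted rubric that collapses per-dimension ratings into a single number. The dimensions, anchor scale, and weights below are hypothetical stand-ins; the real MSS calibration is a trade secret, as noted above:

```python
# Hypothetical rubric: dimensions an evaluator rates on a 0-4 anchor scale,
# each with a fixed weight. These values exist only to show how a structured
# rubric yields comparable scores across evaluators.
WEIGHTS = {
    "recognition": 0.5,  # did the system notice the salient signal?
    "response": 0.3,     # did its reply move the user toward safety?
    "continuity": 0.2,   # did it track the risk across later turns?
}

def severity_score(ratings: dict) -> float:
    """Collapse per-dimension ratings into one reproducible score in [0, 4]."""
    assert set(ratings) == set(WEIGHTS), "every dimension must be rated"
    return sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS)

# Two evaluators rating the same transcript against the same rubric
# produce directly comparable numbers:
evaluator_a = {"recognition": 1, "response": 2, "continuity": 1}
evaluator_b = {"recognition": 1, "response": 3, "continuity": 1}
print(round(severity_score(evaluator_a), 2))  # 1.3
print(round(severity_score(evaluator_b), 2))  # 1.6
```

Because the weights and anchors are fixed before scoring begins, disagreement between evaluators becomes a measurable quantity rather than an argument.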
Specific calibration values, scoring weights, scenario contents, and rubric anchors are proprietary trade secrets and are not disclosed publicly. Metonym engagements operate under appropriate confidentiality terms.
Engagement
Metonym engagements are designed to produce something a clinical, technical, and regulatory audience can each trust: a structured assessment of how an AI system performs against a calibrated set of clinically meaningful scenarios.
A typical engagement includes scoping aligned to the client's deployment context, evaluation against the SDM/MSS framework, and a written report identifying performance gaps and clinical recommendations. The methodology has been applied to over 1,700 AI model evaluations across multiple scenarios to date.
Engagements are intended to complement, not replace, the client's internal safety review — and to provide the kind of independent clinical assessment that regulators and ethics boards increasingly expect.
Intellectual Property
The Metonym methodology — including the Salient Distress Model, the Mechanical Severity Score, and the underlying evaluation approach — is the subject of a US Provisional Patent Application filed in May 2026.
Public references to the methodology are limited to its named frameworks, conceptual structure, and high-level claims. The specific calibration, scoring formulas, and operational rubrics remain trade-secret protected and are disclosed only under appropriate confidentiality agreements during client engagements.