Evidence first
Every AI finding cites the chart text that supports it. The clinician sees exactly what the AI saw; there is no opaque "score increased."
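As an illustrative sketch (the class and field names here are assumptions, not the product's actual schema), a finding can be forced to carry its supporting chart excerpt, so an evidence-free finding never reaches the clinician:

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    # Exact chart excerpt and where it came from, shown next to the finding.
    note_id: str
    excerpt: str
    char_start: int
    char_end: int

@dataclass
class Finding:
    code: str
    rationale: str
    citations: list = field(default_factory=list)

def validate_finding(f: Finding) -> bool:
    """Reject any finding that arrives without supporting chart text."""
    return len(f.citations) > 0 and all(c.excerpt.strip() for c in f.citations)
```

Validating at the boundary, rather than trusting the model, is what makes "the clinician sees what the AI saw" enforceable.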
Bidirectional flags
Coding Review surfaces downcoding gaps as well as upcoding risks. We're checking accuracy, not optimizing for revenue.
Clinician sign-off
AI never auto-applies coding changes. The clinician (or coder) reviews each finding, accepts or rejects, and signs.
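One way to make "never auto-applies" structural rather than procedural is to gate application on an explicit signature, as in this sketch (class and method names are illustrative assumptions):

```python
class ReviewSession:
    """Holds AI findings in a pending state until a clinician signs."""

    def __init__(self, findings):
        self.decisions = {f: None for f in findings}  # None = not yet reviewed
        self.signed = False

    def decide(self, finding, accept: bool):
        if self.signed:
            raise RuntimeError("session already signed; decisions are final")
        self.decisions[finding] = accept

    def sign(self):
        if any(d is None for d in self.decisions.values()):
            raise RuntimeError("every finding must be accepted or rejected first")
        self.signed = True

    def applied_changes(self):
        # Nothing is applied until the clinician signs off.
        if not self.signed:
            return []
        return [f for f, accepted in self.decisions.items() if accepted]
```

The apply path reads only from `applied_changes()`, so an unsigned or partially reviewed session can never write a coding change.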
Audit trail of AI use
Every AI run is logged with the prompt category, input fingerprint, and which findings were accepted. Reviewable per OASIS.
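An input fingerprint lets auditors confirm which chart text a run saw without the log storing the text itself. A minimal sketch of such a record (field names are assumptions):

```python
import hashlib
import datetime

def audit_record(prompt_category: str, input_text: str, accepted_ids: list) -> dict:
    """Build one audit entry for an AI run.

    The raw input is never stored; a SHA-256 fingerprint is enough to
    verify later that a given chart snapshot produced this run.
    """
    return {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt_category": prompt_category,
        "input_fingerprint": hashlib.sha256(input_text.encode("utf-8")).hexdigest(),
        "accepted_findings": accepted_ids,
    }
```

The same input always produces the same fingerprint, so a reviewer can re-hash an archived chart and match it to the logged run.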
Per-tenant prompts
Enterprise tenants can customize prompts to match their coding manual. We never tune the prompt to "lift case mix."
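A simple override-with-default lookup keeps tenant customization contained: a tenant can replace the wording for a category, but the shared default is what everyone else gets. A sketch (the category name and prompt text are placeholders):

```python
# Shared defaults, maintained centrally; never tuned per-tenant for revenue.
DEFAULT_PROMPTS = {
    "coding_review": "Review the chart for coding accuracy in both directions.",
}

def prompt_for(tenant_overrides: dict, category: str) -> str:
    """Return the tenant's custom prompt if one exists, else the default."""
    return tenant_overrides.get(category, DEFAULT_PROMPTS[category])
```

Because overrides are per-category and explicit, it is easy to diff a tenant's prompt against the default during review.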
Graceful degradation
Every AI feature falls back to rule-based logic when AI is unconfigured. A new agency that hasn't set up an AI key still gets the workflow.
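The fallback pattern is straightforward to sketch: try the AI backend if one is configured, and drop to deterministic rules otherwise. Everything here, including the example rule, is illustrative; the function and client names are assumptions:

```python
def rule_based_review(chart_text: str) -> list:
    """Deterministic checks that run with no AI configured.

    One illustrative rule: a wound mentioned without measurements.
    """
    text = chart_text.lower()
    findings = []
    if "wound" in text and "measurement" not in text:
        findings.append("wound documented without measurements")
    return findings

def coding_review(chart_text: str, ai_client=None) -> list:
    """Use the AI backend when configured; otherwise rule-based checks."""
    if ai_client is not None:
        try:
            return ai_client.review(chart_text)
        except Exception:
            pass  # AI failure degrades to rules rather than blocking the workflow
    return rule_based_review(chart_text)
```

A new agency with no AI key takes the `rule_based_review` path and still gets a working review queue; configuring a key later only widens coverage.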