Multi-specialty hospital

A doctor-review copilot for a multi-specialty hospital.

Doctor-facing clinical review copilot interface in a hospital consultation room
The challenge

Doctors were spending too much consultation time switching between disconnected EMR, lab, radiology, pharmacy, appointment, and discharge-summary screens. Important context was easy to miss, and discharge and follow-up documentation took too long to prepare.

The Quellix mandate

Build a doctor-assist copilot, not an autonomous diagnosis system: prepare patient context, surface safety checks, draft notes and instructions, cite sources, and require doctor approval before clinical content is finalized.

What shipped

01

Prepare the chart before the doctor opens it

Before the consultation starts, the copilot assembles a doctor-ready patient brief: history, current medicines, allergies, recent labs, imaging highlights, comorbidities, prior discharge notes, care gaps, and missing information.
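A minimal sketch of what that pre-consult assembly could look like. The record shape and field names here are assumptions for illustration, not the hospital's actual schema; the key behavior is that missing sections are surfaced explicitly instead of silently omitted.

```python
from dataclasses import dataclass, field


@dataclass
class PatientBrief:
    # Illustrative fields; the production brief covers more sections
    # (imaging highlights, comorbidities, prior discharge notes, etc.).
    history: list[str] = field(default_factory=list)
    medications: list[str] = field(default_factory=list)
    allergies: list[str] = field(default_factory=list)
    recent_labs: list[dict] = field(default_factory=list)
    missing_info: list[str] = field(default_factory=list)


def assemble_brief(emr: dict) -> PatientBrief:
    """Collect doctor-ready context and note what is absent rather than guessing."""
    brief = PatientBrief(
        history=emr.get("history", []),
        medications=emr.get("medications", []),
        allergies=emr.get("allergies", []),
        recent_labs=emr.get("labs", []),
    )
    # Surface gaps explicitly so the doctor sees what was NOT found.
    for section in ("history", "medications", "allergies", "labs"):
        if not emr.get(section):
            brief.missing_info.append(f"no {section} on record")
    return brief
```

The point of the `missing_info` list is that "no documented drug allergy" and "allergy section never filled in" are clinically different statements, and the brief keeps them distinct.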

02

Turn scattered reports into point-of-care signals

The copilot compares lab trends, checks medication context, flags care gaps, and retrieves approved hospital protocols. Doctors see what changed, what may be unsafe, and what still needs confirmation without reading every document manually.
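The trend comparison can be reduced to a small helper, sketched here under the assumption that lab results arrive as a numeric series ordered oldest-first (the real system would also carry units, reference ranges, and timestamps).

```python
def lab_delta(series: list[float]) -> tuple:
    """Return (latest, previous, direction) for a lab series ordered oldest-first."""
    if len(series) < 2:
        # One or zero results: nothing to compare, say so instead of guessing.
        return (series[-1] if series else None), None, "insufficient data"
    prev, latest = series[-2], series[-1]
    direction = "rising" if latest > prev else "falling" if latest < prev else "stable"
    return latest, prev, direction
```

For example, the HbA1c values from the mockup below, `lab_delta([7.4, 8.6])`, come back as rising, which is what the copilot would flag for the doctor.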

03

Keep the doctor as the final authority

The system drafts summaries, follow-up plans, prescription instructions, and discharge content, but nothing becomes final until a doctor accepts, edits, or rejects it. Every source, AI output, and doctor action is logged.
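The approval-and-logging contract can be sketched as follows; the action names and log fields are illustrative, but the invariant matches the text: a draft only changes state through an explicit doctor action, and every action is recorded with an actor and timestamp.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AuditLog:
    entries: list[dict] = field(default_factory=list)

    def record(self, draft_id: str, action: str, actor: str) -> dict:
        entry = {
            "draft_id": draft_id,
            "action": action,   # accept / edit / reject
            "actor": actor,     # the approving doctor, never the model
            "at": datetime.now(timezone.utc).isoformat(),
        }
        self.entries.append(entry)
        return entry


def finalize(draft: dict, action: str, doctor: str, log: AuditLog) -> dict:
    """Nothing becomes final without an explicit doctor action, and every
    action lands in the audit log before the draft's status changes."""
    if action not in {"accept", "edit", "reject"}:
        raise ValueError(f"unknown action: {action}")
    log.record(draft["id"], action, doctor)
    draft["status"] = "final" if action == "accept" else action + "ed"
    return draft
```

Because `finalize` is the only path to a `final` status, the audit trail is complete by construction rather than by discipline.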

How the agents are wired together
Clinical systems: EMR (visits · notes), Labs (trends · flags), Pharmacy (medicines)
Context layer: Patient context (normalized), Guidelines (approved sources)
AI assist + safety: Doctor copilot (summaries · drafts), Safety checks (allergy · gaps), Doctor (approves)

What it looked like in action

Representative mockup using anonymized sample data. The interaction patterns reflect the production flows; names, amounts, IDs, and dates are illustrative.

Doctor brief · before consult (doctor approval required)

62-year-old male · diabetes + hypertension
Follow-up after elevated HbA1c. CKD stage 2. No documented drug allergy.

HbA1c: 8.6% (up from 7.4%)
eGFR: stable · monitor dose
Care gap: eye exam not recorded
Sources: EMR · lab reports · medication record · guideline library

Draft follow-up plan
Suggested review: 4 weeks with home BP readings.
Repeat tests: HbA1c in 3 months; urine albumin if not done.
Safety note: review dose suitability for renally cleared medicines before final prescription.
Actions: Accept · Edit · Reject

Business outcomes

50-70%
Target reduction in patient-history review time for supported follow-up workflows.
Safer drafts
Medication cautions, care gaps, and missing context are visible before final approval.
900 hrs/mo
Illustrative monthly time recovery across OPD review and discharge drafting at pilot scale.

Technical capabilities demonstrated

The systems and controls behind the story above.

Retrieval-augmented clinical context

The copilot retrieves patient history, labs, medications, guideline snippets, and hospital protocols with source traceability before generating doctor-facing outputs.
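The load-bearing detail here is source traceability: every retrieved snippet keeps an identifier for where it came from, so generated text can cite it. A minimal sketch, using naive keyword scoring in place of whatever retriever the production system uses:

```python
def retrieve_with_sources(query: str, corpus: list[dict]) -> list[dict]:
    """Naive keyword retrieval. Each hit keeps its source id so the
    doctor-facing output can cite where every statement came from.
    Corpus entries are assumed to look like {"text": ..., "source": ...}."""
    terms = set(query.lower().split())
    hits = []
    for doc in corpus:
        # Score = number of query terms appearing in the document text.
        score = sum(1 for t in terms if t in doc["text"].lower())
        if score:
            hits.append({"text": doc["text"], "source": doc["source"], "score": score})
    return sorted(hits, key=lambda h: -h["score"])
```

A real deployment would swap the scoring for embedding search, but the contract is the same: no snippet enters the generation context without a `source` field attached.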

OpenAI Agents SDK clinical workflow agent

The agent uses typed tools for patient summary, lab trends, safety checks, guideline retrieval, draft generation, approval capture, and audit logging.
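The "typed tools" pattern can be illustrated without the SDK itself: each tool function declares its parameter types, and the runtime validates model-supplied arguments against the signature before the call executes. The registry and tool below are a stdlib-only mimic of that idea, not the SDK's actual API, and the tool body is a placeholder.

```python
import inspect

TOOLS: dict[str, dict] = {}


def typed_tool(fn):
    """Register a function with its signature so the runtime can
    validate arguments before a model-invoked call executes."""
    TOOLS[fn.__name__] = {
        "fn": fn,
        "params": {n: p.annotation for n, p in inspect.signature(fn).parameters.items()},
    }
    return fn


@typed_tool
def lab_trend(patient_id: str, test_name: str) -> dict:
    # Placeholder body; the production tool would query the lab system.
    return {"patient_id": patient_id, "test": test_name, "trend": "rising"}


def call_tool(name: str, **kwargs):
    """Reject calls whose argument types don't match the declared signature."""
    spec = TOOLS[name]
    for arg, value in kwargs.items():
        expected = spec["params"][arg]
        if not isinstance(value, expected):
            raise TypeError(f"{arg} must be {expected.__name__}")
    return spec["fn"](**kwargs)
```

Typing the boundary matters in a clinical setting because a malformed model call fails loudly at validation instead of reaching a downstream system.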

Medication and care-gap safety checks

Rules and retrieval context flag allergies, duplicate therapy, kidney/liver cautions, monitoring needs, overdue tests, and missing chronic-care checks.
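A sketch of the rule side of those checks. Drug names, the eGFR threshold, and the care-gap item are all illustrative; in production the watch-lists come from hospital protocols, and flags are advisory input to the doctor, never blocks on care.

```python
# Illustrative watch-list; the real list comes from hospital protocols.
RENALLY_CLEARED = {"metformin"}


def safety_flags(patient: dict, new_rx: str) -> list[str]:
    """Rule-based checks run before a draft prescription reaches the doctor."""
    flags = []
    if new_rx in patient.get("allergies", []):
        flags.append(f"ALLERGY: {new_rx} is on the documented allergy list")
    if new_rx in patient.get("medications", []):
        flags.append(f"DUPLICATE: {new_rx} is already prescribed")
    egfr = patient.get("egfr")
    if egfr is not None and egfr < 60 and new_rx in RENALLY_CLEARED:
        flags.append(f"RENAL: {new_rx} is renally cleared; review dose (eGFR {egfr})")
    if "eye exam" not in patient.get("completed_checks", []):
        flags.append("CARE GAP: annual eye exam not recorded")
    return flags
```

Deterministic rules like these sit alongside retrieval context precisely because they are auditable: each flag can state which record and which threshold triggered it.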

Human-in-the-loop finalization

No AI-generated diagnosis, prescription, discharge summary, or patient instruction is finalized without clinician review and approval.

Source-linked clinical outputs

Generated summaries cite the relevant patient record, lab, medication, report, or guideline source so doctors can inspect the evidence.

Specialty-specific rollout path

The pilot starts with internal medicine, diabetology, cardiology follow-up, and selected discharge workflows before expanding to higher-risk modules.

Architecture

Clinical context layer

Approved hospital systems feed normalized patient context from EMR notes, lab reports, radiology text, medication records, allergies, appointments, discharge summaries, and hospital protocols.

Doctor-assist copilot modules

Patient summary, lab trend review, medication safety, follow-up planning, patient instructions, guideline lookup, differential support, care-gap detection, and discharge drafting run as assistive modules.

Clinical approval and audit layer

Every output stays editable and source-linked. Doctors accept, edit, reject, regenerate, comment, and approve before anything is pushed back to the patient record.

Orchestration flow
Clinical systems → Patient context → Doctor copilot → Clinician approval
Security · compliance · governance

Doctor remains decision-maker

AI prepares and flags; clinicians diagnose, prescribe, and approve final care content.

No autonomous prescribing

Prescription drafts require doctor-selected intent/templates and final doctor approval.

Traceable evidence

Every clinical output links back to the patient data, guideline, or protocol used to generate it.

Safety-first rollout

High-risk modules are introduced only after approval workflows, templates, and monitoring are validated.
