A doctor-review copilot for a multi-specialty hospital.

Doctors were spending too much consultation time switching between disconnected EMR, lab, radiology, pharmacy, appointment, and discharge-summary screens. Important context was easy to miss, and discharge and follow-up documentation took too long to prepare.
The goal was a doctor-assist copilot, not an autonomous diagnosis system: prepare patient context, surface safety checks, draft notes and instructions, cite sources, and require doctor approval before clinical content is finalized.
What shipped
Prepare the chart before the doctor opens it
Before the consultation starts, the copilot assembles a doctor-ready patient brief: history, current medicines, allergies, recent labs, imaging highlights, comorbidities, prior discharge notes, care gaps, and missing information.
Turn scattered reports into point-of-care signals
The copilot compares lab trends, checks medication context, flags care gaps, and retrieves approved hospital protocols. Doctors see what changed, what may be unsafe, and what still needs confirmation without reading every document manually.
Keep the doctor as the final authority
The system drafts summaries, follow-up plans, prescription instructions, and discharge content, but nothing becomes final until a doctor accepts, edits, or rejects it. Every source, AI output, and doctor action is logged.
What it looked like in action
Representative mockup using anonymized sample data. The interaction patterns reflect the production flows; names, amounts, IDs, and dates are illustrative.
Business outcomes
Technical capabilities demonstrated
The systems and controls behind the story above.
Retrieval-augmented clinical context
The copilot retrieves patient history, labs, medications, guideline snippets, and hospital protocols with source traceability before generating doctor-facing outputs.
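A minimal sketch of what "retrieval with source traceability" means in practice: every retrieved snippet carries the identifier of the record it came from, so generated drafts can cite their evidence. The snippet fields, ID scheme, and scoring are illustrative, not the production retriever.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Snippet:
    source_id: str   # illustrative ID scheme, e.g. "lab:2024-05-12:hba1c"
    kind: str        # "lab", "medication", "guideline", ...
    text: str

def retrieve(snippets: list[Snippet], query_terms: list[str], top_k: int = 3) -> list[Snippet]:
    """Rank snippets by naive term overlap. Every hit keeps its source_id,
    so downstream drafts can link back to the exact record they used."""
    def score(s: Snippet) -> int:
        words = s.text.lower().split()
        return sum(words.count(t.lower()) for t in query_terms)
    ranked = sorted(snippets, key=score, reverse=True)
    return [s for s in ranked if score(s) > 0][:top_k]
```

A production system would use embedding search over normalized clinical text; the invariant that matters here is that the `source_id` is never separated from the text it labels.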
OpenAI Agents SDK clinical workflow agent
The agent uses typed tools for patient summary, lab trends, safety checks, guideline retrieval, draft generation, approval capture, and audit logging.
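The production agent registers its tools through the OpenAI Agents SDK; the sketch below shows the same typed-tool shape with a plain registry so it stands alone without the SDK. The tool and type names (`lab_trend`, `LabTrend`) are illustrative, not the actual tool surface.

```python
from dataclasses import dataclass
from typing import Callable

# Registry standing in for the SDK's tool registration: the function's
# type hints double as the tool's input/output schema.
TOOLS: dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    TOOLS[fn.__name__] = fn
    return fn

@dataclass
class LabTrend:
    test: str
    values: list[float]
    direction: str  # "rising", "falling", or "stable"

@tool
def lab_trend(test: str, values: list[float]) -> LabTrend:
    """Summarize whether a lab series is rising, falling, or stable."""
    if len(values) < 2 or values[-1] == values[0]:
        direction = "stable"
    elif values[-1] > values[0]:
        direction = "rising"
    else:
        direction = "falling"
    return LabTrend(test=test, values=values, direction=direction)
```

Keeping each capability behind a typed function like this is what lets the approval and audit layers log exactly which tool produced which output.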
Medication and care-gap safety checks
Rules and retrieval context flag allergies, duplicate therapy, kidney/liver cautions, monitoring needs, overdue tests, and missing chronic-care checks.
Human-in-the-loop finalization
No AI-generated diagnosis, prescription, discharge summary, or patient instruction is finalized without clinician review and approval.
Source-linked clinical outputs
Generated summaries cite the relevant patient record, lab, medication, report, or guideline source so doctors can inspect the evidence.
Specialty-specific rollout path
The pilot starts with internal medicine, diabetology, cardiology follow-up, and selected discharge workflows before expanding to higher-risk modules.
Architecture
Clinical context layer
Approved hospital systems feed normalized patient context from EMR notes, lab reports, radiology text, medication records, allergies, appointments, discharge summaries, and hospital protocols.
Doctor-assist copilot modules
Patient summary, lab trend review, medication safety, follow-up planning, patient instructions, guideline lookup, differential support, care-gap detection, and discharge drafting run as assistive modules.
Clinical approval and audit layer
Every output stays editable and source-linked. Doctors accept, edit, reject, regenerate, comment, and approve before anything is pushed back to the patient record.
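One way to make that audit trail trustworthy is to chain entries, so every record of a source, AI output, or doctor action commits to everything before it. A minimal sketch, assuming a hash-chained in-memory log (the production store and entry schema would differ):

```python
import datetime
import hashlib
import json

class AuditLog:
    """Append-only log of sources, AI outputs, and doctor actions.
    Each entry embeds the hash of the previous one, so any tampering
    with history breaks the chain."""

    def __init__(self):
        self.entries: list[dict] = []

    def record(self, actor: str, action: str, payload: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "actor": actor,          # e.g. "copilot" or a doctor ID
            "action": action,        # e.g. "draft_generated", "approved"
            "payload": payload,
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "prev": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body["hash"]
```

Because accept, edit, reject, regenerate, and comment all flow through `record`, the log reconstructs who saw what evidence and who approved the final content.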
Doctor remains decision-maker
AI prepares and flags; clinicians diagnose, prescribe, and approve final care content.
No autonomous prescribing
Prescription drafts require doctor-selected intent/templates and final doctor approval.
Traceable evidence
Every clinical output links back to the patient data, guideline, or protocol used to generate it.
Safety-first rollout
High-risk modules are introduced only after approval workflows, templates, and monitoring are validated.