Human-in-the-Loop (HITL)
A design pattern in which a human reviews, corrects, or approves an AI system's output before it is finalized or acted upon.
Full definition
Human-in-the-loop (HITL) is the architectural pattern that separates production AI from demos. Rather than letting an AI system act autonomously, HITL routes uncertain or high-stakes outputs to a human reviewer with full context attached. The human's decision is logged and fed back as training or evaluation data, creating a continuous improvement loop. In high-stakes domains like healthcare, finance, and legal, HITL is not optional: it is the product. Effective HITL design uses confidence-based routing, where high-confidence outputs auto-execute and low-confidence or high-stakes outputs escalate to a reviewer.
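Confidence-based routing can be sketched in a few lines. This is a minimal illustration, not a production implementation: the threshold value, the `ModelOutput` shape, and the handler names are all assumptions, and real systems would derive the confidence score from a calibrated model or verifier.

```python
from dataclasses import dataclass
from typing import Callable

# Assumed threshold; in practice this is tuned per domain and risk tolerance.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class ModelOutput:
    text: str
    confidence: float  # score in [0, 1], e.g. from a calibrated verifier

review_queue: list[ModelOutput] = []

def auto_execute(output: ModelOutput) -> str:
    # High-confidence path: act without human involvement.
    return f"executed: {output.text}"

def send_to_reviewer(output: ModelOutput) -> str:
    # Low-confidence path: queue for a human, with the output attached as context.
    # The reviewer's eventual decision would be logged as feedback data.
    review_queue.append(output)
    return f"escalated: {output.text}"

def route(output: ModelOutput,
          execute: Callable[[ModelOutput], str],
          escalate: Callable[[ModelOutput], str]) -> str:
    """Confidence-based routing: auto-execute or escalate to a human."""
    if output.confidence >= CONFIDENCE_THRESHOLD:
        return execute(output)
    return escalate(output)

print(route(ModelOutput("refund $20", 0.95), auto_execute, send_to_reviewer))
print(route(ModelOutput("refund $9,000", 0.40), auto_execute, send_to_reviewer))
```

The key design choice is that both paths return through the same interface, so escalation is invisible to downstream code; only latency differs.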
Frequently asked
Why use human-in-the-loop?
Because AI systems make confident-sounding mistakes. HITL catches them before they reach the user or trigger an action.
Where should HITL sit in the pipeline?
As close to the action as possible: review the LLM output, not the input.