Human in the loop

Machine learning models can be unreliable. A human-in-the-loop system is one in which machine learning inferences are continually assessed by a human operator. For example, GitHub's Copilot has a human in the loop: the suggestions produced by the code model are accepted or rejected by the developer writing the code.
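
As a rough sketch, the pattern looks something like the following, where `suggest_completion()` stands in for a hypothetical model call (it is not Copilot's actual API): the model proposes an output, and a human accepts or rejects it before it is used.

```python
from typing import Optional


def suggest_completion(prompt: str) -> str:
    """Stand-in for a code-completion model (hypothetical)."""
    return "def add(a, b):\n    return a + b"


def human_in_the_loop(prompt: str) -> Optional[str]:
    suggestion = suggest_completion(prompt)
    print("Model suggestion:\n" + suggestion)
    decision = input("Accept this suggestion? [y/n] ").strip().lower()
    # Only accepted inferences flow through; rejected ones are discarded.
    # This accept/reject gate is the core of a human-in-the-loop workflow.
    return suggestion if decision == "y" else None


if __name__ == "__main__":
    result = human_in_the_loop("Write a function that adds two numbers")
    print("Accepted." if result else "Rejected - nothing applied.")
```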

In essence, a human-in-the-loop approach is valuable wherever human judgement, expertise, or oversight is needed to improve the performance, safety, ethics, and overall reliability of AI systems. It is commonly used for quality assurance, in safety-critical systems, for legal and ethical compliance, in content moderation, in anomaly detection, and in error recovery. Keeping a human in the loop helps ensure that AI technologies are deployed responsibly and effectively across a wide range of applications and domains.
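
In applications such as content moderation or anomaly detection, a common way to apply this is confidence-based escalation: the system acts automatically on high-confidence predictions and routes low-confidence ones to a human reviewer. The sketch below illustrates the idea; `classify()`, the 0.9 threshold, and the review queue are illustrative assumptions, not a specific library's API.

```python
from dataclasses import dataclass


@dataclass
class Prediction:
    label: str
    confidence: float


def classify(text: str) -> Prediction:
    """Stand-in for a real moderation model (hypothetical)."""
    return Prediction(label="allowed", confidence=0.72)


def send_to_human_review(text: str, pred: Prediction) -> str:
    # In practice this would enqueue the item for a human moderator;
    # here we simply flag it for illustration.
    return f"needs_human_review (model said {pred.label} @ {pred.confidence:.2f})"


def moderate(text: str, threshold: float = 0.9) -> str:
    pred = classify(text)
    if pred.confidence >= threshold:
        # High confidence: act on the model's decision automatically.
        return pred.label
    # Low confidence: defer to a human reviewer instead of acting automatically.
    return send_to_human_review(text, pred)


print(moderate("some user-generated post"))
```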
