On Thursday, OpenAI researchers unveiled CriticGPT, a new AI model designed to identify errors in code generated by ChatGPT. It aims to improve the process of making AI systems behave in ways humans want (called "alignment") through Reinforcement Learning from Human Feedback (RLHF), which helps human reviewers make large language model (LLM) outputs more accurate.
As outlined in a new research paper called "LLM Critics Help Catch LLM Bugs," OpenAI created CriticGPT to act as an AI assistant to the human trainers who review programming code generated by the ChatGPT AI assistant. CriticGPT, based on the GPT-4 family of LLMs, analyzes the code and points out potential errors, making it easier for humans to spot mistakes that might otherwise go unnoticed. The researchers trained CriticGPT on a dataset of code samples with intentionally inserted bugs, teaching it to recognize and flag various coding errors.
The development of CriticGPT involved training the model on a large number of inputs containing deliberately inserted mistakes. Human trainers were asked to modify code written by ChatGPT, introducing errors and then providing example feedback as if they had discovered those bugs. This process allowed the model to learn how to identify and critique various types of coding errors.
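To make that process concrete, here is a minimal, hypothetical sketch in Python of how one such tampered-code training example might be bundled together. The data structure, field names, and the off-by-one bug are illustrative assumptions, not OpenAI's actual pipeline or data format.

```python
from dataclasses import dataclass

@dataclass
class TamperedExample:
    original_code: str       # code originally written by ChatGPT
    tampered_code: str       # the same code after a trainer inserts a subtle bug
    bug_description: str     # the trainer's note on what was broken and where
    reference_critique: str  # example feedback written as if the trainer had just found the bug

# Example usage: a trainer turns a correct mean() into an off-by-one version
# and writes the critique the model should learn to produce.
example = TamperedExample(
    original_code="def mean(xs):\n    return sum(xs) / len(xs)\n",
    tampered_code="def mean(xs):\n    return sum(xs) / (len(xs) - 1)\n",
    bug_description="Denominator changed from len(xs) to len(xs) - 1.",
    reference_critique=(
        "The function divides by len(xs) - 1 instead of len(xs), so it does not "
        "compute the arithmetic mean and divides by zero for single-element lists."
    ),
)
```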
In experiments, CriticGPT demonstrated its ability to catch both inserted bugs and naturally occurring errors in ChatGPT's output. The new model's critiques were preferred by trainers over those generated by ChatGPT itself in 63 percent of cases involving natural bugs (the aforementioned statistic). That preference was partly due to CriticGPT producing fewer unhelpful "nitpicks" and fewer false positives, or hallucinated problems.
The researchers also created a new technique they call Force Sampling Beam Search (FSBS). This method helps CriticGPT write more detailed critiques of code. It lets the researchers adjust how thorough CriticGPT is in looking for problems while also controlling how often it might make up issues that don't really exist. They can tweak this balance depending on what they need for different AI training tasks.
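As a rough illustration of that trade-off, the sketch below chooses among several candidate critiques using a score that balances overall quality against how many issues a critique flags. The scoring terms and the thoroughness_weight knob are assumptions made for illustration; they are not the actual formulation of FSBS from the paper.

```python
from typing import Callable, List

def pick_critique(candidates: List[str],
                  quality_score: Callable[[str], float],
                  issue_count: Callable[[str], int],
                  thoroughness_weight: float) -> str:
    """Pick the candidate critique that best balances quality against coverage.

    quality_score stands in for a reward-model-style judgment of how accurate
    and helpful a critique is; issue_count stands in for how many distinct
    problems it flags. Raising thoroughness_weight favors longer, more
    exhaustive critiques at the cost of more potential false positives.
    """
    def score(critique: str) -> float:
        return quality_score(critique) + thoroughness_weight * issue_count(critique)

    return max(candidates, key=score)
```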
Interestingly, the researchers found that CriticGPT's capabilities extend beyond just code review. In their experiments, they applied the model to a subset of ChatGPT training data that had previously been rated as flawless by human annotators. Surprisingly, CriticGPT identified errors in 24 percent of those cases, errors that were later confirmed by human reviewers. OpenAI thinks this demonstrates the model's potential to generalize to non-code tasks and highlights its ability to catch subtle errors that even careful human evaluation might miss.
Despite its promising results, CriticGPT, like all AI models, has limitations. The model was trained on relatively short ChatGPT answers, which may not fully prepare it for evaluating the longer, more complex tasks that future AI systems might handle. Additionally, while CriticGPT reduces confabulations, it doesn't eliminate them entirely, and human trainers can still make labeling mistakes based on those false outputs.
The research team acknowledges that CriticGPT is most effective at identifying errors that can be pinpointed in one specific location within the code. However, real-world mistakes in AI outputs can often be spread across multiple parts of an answer, presenting a challenge for future model iterations.
OpenAI plans to integrate CriticGPT-like models into its RLHF labeling pipeline, giving its trainers AI assistance. For OpenAI, it's a step toward developing better tools for evaluating outputs from LLM systems that may be difficult for humans to rate without additional help. However, the researchers caution that even with tools like CriticGPT, extremely complex tasks or responses may still prove challenging for human evaluators, even those assisted by AI.