ChatGPT-generated answers went undetected in assessments at a UK university and scored more highly than those of real students, researchers have said. The AI-written answers went undetected in 94 per cent of cases, despite being used in the most "detectable way possible", and typically scored half a grade higher than students' submissions; in the most extreme disparity, AI scored almost a full grade higher.

Consequently, "it seems very likely that our markers graded, and did not detect, answers that students had produced using AI in addition to our 100 per cent AI-generated answers", the paper says. The researchers' use of AI was perhaps atypical of students, who are more likely to edit ChatGPT's answers before submitting them.