ChatGPT can outperform university students at writing assignments, study finds
ChatGPT may match or even exceed the average grade of university students when answering assessment questions across a range of subjects including computer science, political studies, engineering, and psychology, reports a paper published in Scientific Reports. The research also found that almost three-quarters of students surveyed would use ChatGPT to help with their assignments, despite many educators considering its use to be plagiarism.
To investigate how ChatGPT performed when writing university assessments compared to students, Talal Rahwan and Yasir Zaki invited faculty members who taught 32 different courses at New York University Abu Dhabi (NYUAD) to provide three student submissions each for 10 assessment questions that they had set.
ChatGPT was then asked to produce three sets of answers to the 10 questions, which were assessed alongside the student-written answers by three graders who were unaware of each answer's source. The ChatGPT-generated answers achieved an average grade similar to or higher than the students' in nine of the 32 courses.
Only mathematics and economics courses saw students consistently outperform ChatGPT. ChatGPT outperformed students most markedly in the “Introduction to Public Policy” course, where its average grade was 9.56 compared to 4.39 for students.
The authors also surveyed 1,601 individuals from Brazil, India, Japan, the US, and the UK (including at least 200 students and 100 educators from each country) on whether ChatGPT could be used to assist with university assignments. Some 74% of students indicated that they would use ChatGPT in their work.
In contrast, educators in all countries underestimated the proportion of students who plan to use ChatGPT, and 70% of educators reported that they would treat its use as plagiarism.
Finally, the authors report that two tools for identifying AI-generated text, GPTZero and AI Text Classifier, misclassified the ChatGPT answers generated in this research as human-written 32% and 49% of the time, respectively.
Together, these findings offer insights that could inform policy for the use of AI tools within educational settings.