Examiners at the University of Reading failed to detect AI-generated exam answers in a real-world blind test: markers, who were unaware of the project, flagged only one of the 33 AI-written entries. The study, published in PLOS ONE, raises questions about whether examiners can reliably distinguish student submissions from AI output.
CheatGPT! Examiners struggle to tell the difference between answers written by AI and those from real human students - so, can you tell which of these papers was written by a bot? https://t.co/mwFYvDqX3x https://t.co/x6DdFGCPJ4
University examiners fail to spot AI-generated answers in real-world test - with copy from @PA students https://t.co/Ui2PWigtO5
Can we tell when a student submission is AI? This study from the University of Reading suggests not. "The university’s markers – who were not told about the project – flagged only one of the 33 entries." https://t.co/dTuSCTJnHF
AI generated exam answers go undetected in real-world blind test @unirdg_news @PLOSONE https://t.co/wHCt1HwEGO