A recent study found that exam answers written by AI scored higher than those written by university students, and in almost every case the markers failed to spot that an AI had taken the exam. Academics at the University of Reading created 33 fake student profiles and used ChatGPT to generate answers to end-of-module exams on an undergraduate psychology degree programme.
The AI answers received better grades than those of the real students, averaging half a grade boundary higher. Strikingly, 94% of the AI-written submissions raised no suspicion among the markers, suggesting that AI-generated work can pass largely unnoticed.
“This is rather worrying because AI submissions confidently secured better grades than student submissions.” The study is published in the journal PLOS ONE. In other words, for some types of assessment, students could cheat undetected with the help of artificial intelligence and earn a higher grade than they would have achieved honestly.
As the study’s authors, Associate Professor Peter Scarfe and Professor Etienne Roesch, point out, the findings have profound implications for education providers around the world, not least because many institutions have already moved away from traditional exams in order to make assessment more inclusive. The researchers stress that AI’s role in dishonest educational assessment is a matter of international relevance: education is unlikely to return wholesale to handwritten exams, but the global education sector will need to evolve in the face of AI.
In the study, the researchers produced fake exam answers and essays for first-, second-, and third-year modules and submitted them without the markers’ knowledge. The results showed that the AI outperformed the genuine undergraduates in the first two years. In the third-year exams, however, real students came out ahead, a gap the researchers attributed to current AI’s weakness in complex and abstract reasoning.
The blind study reported in the paper is, the authors say, the largest and most methodologically rigorous investigation of its kind to date. It matters because it raises a range of concerns about the educational process and the role artificial intelligence should play within it. Leading figures in the field have voiced concern that AI could undermine the credibility of academic assessment; Glasgow University, for instance, recently reinstated in-person exams for one of its courses over exactly these issues.
In January of this year, a Guardian report found that a majority of undergraduates were using AI tools to help with their assignments, though only 5% admitted to pasting unedited AI-generated text directly into their work.
The researchers said their results should serve as a wake-up call to educators worldwide: as AI technology continues to develop, institutions need to rethink how they assess students and how they guard against cheating.
AI can undoubtedly be a useful tool in education, but this research highlights the growing problem of AI-generated coursework and the need for the global education sector to work together on how assessment should adapt. The credibility of assessment should not be sacrificed for the sake of flattering measures of student performance.