Academics "mark students down" when AI use is suspected



Some academics have marked down students who appear to have used artificial intelligence (AI), even when the assessment permitted its use.

A new paper on the use of AI in assessment, published in Research in Higher Education, claims the technology has created "a messy grading space filled with tension and contradictions."

The researchers interviewed 33 academics from China's Greater Bay Area. Some of their universities had specific AI policies, while others did not.

The academics were asked how they treated work in which they suspected AI had been used, or in which students had declared their use of the technology.

Overall, the study found that academics' marking was influenced by their perceptions of AI use, and that such technologies "complicate values that have long been celebrated in student work, such as originality and independence."

One academic said: "I think this is dishonest and says a lot about the integrity of students. If students think they can use AI to cheat and get away with it, then as a teacher I need to do my job."

Another said: "If two assignments show the same quality, but student B completed theirs independently without AI, then wouldn't this show B to be more capable and deserving of a higher grade?"

Even when students declared their AI use as permitted, some academics still marked them down, pointing to the "tension" between the "legitimacy" of using AI and the "traditional emphasis on independence as a marker of intellectual ability."

Humanities instructors were more likely to be critical of AI use and, as a result, to penalise students by docking marks, reflecting broader concerns in these disciplines that AI is a "shortcut that undermines processes important to learning."

Jiahui Luo, assistant professor at the Education University of Hong Kong, and Phillip Dawson, co-director of Deakin University's Centre for Research in Assessment and Digital Learning, report that academics' expectations around AI are often implicit and not openly communicated to students.

They told Times Higher Education: "Currently, most assessments sit in a middle ground where the use of AI is neither explicitly prohibited nor required, but students are expected to declare their use of AI.

"This creates variation in the way students approach assignments. Some report extensive use of AI, while others report none at all, but it is unclear how these different uses of AI are interpreted by teachers and subsequently factored into grading."

"This will likely lead to a decline in the reliability of grades, and to distrust and perceptions of unfairness among students," Luo said.

The paper argues that marking for "validity" could "provide a path forward." This requires that both staff and students have a clear, shared understanding of what "a specific task is intended to assess."

"Through the lens of validity, the use of genAI could be a justification to mark down a student's work if (and only if) it prevents the student from demonstrating that they have met the outcomes being assessed."

Under this model, for example, it would be fair to mark down language students who used AI in their work, because "the use of the tool hindered students' ability to demonstrate their writing skills."

The paper says it is "important" for academics to be "explicit" with students about how using genAI in a task affects grading, and recommends that universities organise workshops to ensure lecturers align their grading practices with educational goals.

juliette.rowsell@timeshighereducation.com



