AI may improve childhood creativity ratings

Researchers at the University of Georgia aim to improve how children’s creativity is assessed, using both human evaluation and artificial intelligence.

A team at the Mary Frances Early College of Education is developing an AI system that can more accurately score open-ended responses on elementary school creativity assessments.

Denis Dumas, Associate Professor, Department of Educational Psychology

“Just like hospital systems need good data about their patients, educational systems need really good data about their students to make effective choices,” said study author Denis Dumas, associate professor of educational psychology. “Creativity assessment has policy and curriculum relevance, and without assessment data, we cannot fully support creativity in schools.”

Schools commonly use these tests to identify gifted students so they can provide them with additional instructional resources. But because the tests are slow to score and their open-ended questions require multiple trained human judges, they are not as widely used as math, reading, or IQ assessments. An AI scoring system could make creativity assessment a more accessible tool for schools.

To improve the AI’s capabilities, Dumas and his collaborators analyzed more than 10,000 individual responses from a 30-minute creativity assessment. They found that some categories of students and some types of responses led to less consistency in judges’ creativity ratings. All of the students’ personal information was removed from the assessments, and the judges received only the responses themselves.

“Our judges didn’t know who the children were or their specific demographics,” Dumas said. “While there was no obvious bias, there was something about the way some students responded that made it difficult for our team to reliably assess their responses.”

Judges were instructed to score responses on a scale of 1 (least original) to 5 (most original). They were more likely to disagree on less original responses and on responses from younger children or from male students.
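To make the idea of rater disagreement concrete, here is a minimal Python sketch. It is not the study’s scoring pipeline, just one simple way to quantify how far judges diverge on a single response, using the 1-to-5 originality scale and, as hypothetical input, the 1-to-4 spread of ratings described for the hat example below.

```python
# Illustrative sketch only: NOT the study's actual scoring pipeline.
# It quantifies inter-rater disagreement on the 1 (least original)
# to 5 (most original) scale described in the article, using the
# standard deviation of judges' ratings for a single response.
from statistics import mean, stdev

def disagreement(ratings: list[int]) -> float:
    """Standard deviation across judges; higher means less consensus."""
    return stdev(ratings)

# Hypothetical ratings: the article notes one response (the hat example
# below) drew scores ranging from 1 to 4 across judges.
hat_ratings = [1, 2, 3, 4]
print(f"mean originality: {mean(hat_ratings):.2f}")           # 2.50
print(f"disagreement (SD): {disagreement(hat_ratings):.2f}")  # 1.29
```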

“I would have expected more disagreement among the raters at the top of the originality scale, but the judges are looking for originality, so when a response was unusual and surprising, we found they were more likely to agree,” Dumas said. “It was the responses scored lower on originality that produced more of the disagreement.”

For example, a third grader who was asked to suggest an unexpected use for a hat wrote, “If you cut off the brim part, it will look silly.” Ratings for this response ranged from 1 to 4, and the study highlighted it as an example of how difficult it can be to assess young students’ responses. Some judges considered the answer unoriginal because the hat remained something worn on the head. Others, however, thought the change to the hat’s appearance was funny, surprising, and age-appropriately creative for a third grader.

The team also saw a wider range of scores for highly creative responses from gifted students, for Latino students who were perceived as English learners, and for Asian students who spent more time on the task. All of these factors further increased rating discrepancies.

“Bilingual children will write different answers; their responses are formulated differently than monolingual children’s,” Dumas said. “Many of our raters were also bilingual, and that kind of difference can be difficult to handle in the context of an assessment.”

By understanding where the rating discrepancies occurred, Dumas said, the AI system can be retrained to improve its accuracy and reduce the margin of error in its ratings. Such error bars are standard for assessments commonly used in schools, but they tend to be wider for creativity assessments than for, say, math or reading tests. The narrower the band, the more confident schools can be in making decisions based on the scores.
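As a rough illustration of why band width matters, here is a hedged sketch; the article does not say how the study computes its error bars, so this assumes a generic standard-error-of-measurement (SEM) style interval with hypothetical numbers.

```python
# Illustrative sketch, not the study's actual method: assumes a simple
# SEM-style confidence band around an observed score. A smaller SEM
# yields a narrower band, and therefore more confident decisions.
def score_band(observed: float, sem: float, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence band: observed score +/- z * SEM."""
    return observed - z * sem, observed + z * sem

# Hypothetical numbers: the same originality score with a wide vs. narrow SEM.
for sem in (0.6, 0.3):
    low, high = score_band(3.2, sem)
    print(f"SEM={sem}: 95% band ({low:.2f}, {high:.2f})")
```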

Dumas said the study is a step toward improving the accuracy and thus reliability of these assessments.

“What schools value tends to be what teachers value in their teaching, so the values and priorities of the school system are reflected in the assessments schools choose,” Dumas said. “We hope to further incorporate creativity assessments into the school psychologist’s toolkit so that we can observe young children’s creative potential and have options for interpreting it as a strength.”

The project was funded by the US Department of Education and included collaborators from the University of Denver and the University of North Texas. Many of the study’s authors were graduate students at the time they worked on the project.



