FE News | Does AI assessment benefit English learners?



Despite the opportunities AI presents, when it comes to English assessment, Dr. Evelina Galaczi of Cambridge University Press & Assessment argues that humans must stay in the driver's seat to balance innovation with quality.

A recent YouGov poll found that 39% of people believe AI-based tests may not assess the relevant language skills, potentially disadvantaging those taking exams in order to work, live, or study in the UK. The poll also found that a quarter of respondents (26%) expressed anxiety about having their competence assessed with little prospect of human interaction.

For me, the poll highlights two important points. First, English is fundamentally a human skill, so we must think critically about the role AI should play in assessing it. Second, if the required skills are not properly evaluated under an AI-only approach, we risk misrepresenting an individual's abilities.

Misplaced fear?

This does not mean that AI has no role to play. We need to understand what it can do and, more importantly, what it cannot do right now. Professor Rose Luckin of UCL refers to this in her work as being "AI-ready."

The general fear around AI is often somewhat misguided, because AI is not simply good or bad. Just because some students and candidates can use AI to cheat does not mean AI will inevitably increase instances of fraud. To be AI-ready, language assessment experts must focus on how to integrate AI into assessment, not on how to ban it. This creates an increasingly important role for the humans using AI.

The evolving role of humans

AI will not immediately replace humans in assessment; rather, human roles will change. People need to stay in the driver's seat at every stage of an assessment, starting with test design. So where tests were once designed entirely by people, in future humans will embed AI into the test development process.

Look at test delivery and you see a similar picture. You cannot delegate everything to a machine: in some contexts, a human examiner or teacher must be involved to deliver a speaking test. The same can be said for marking, test security, and invigilation.

And of course, if you dig deeper and look at communication more generally, there are many human aspects of communication that will not disappear with the emergence of AI. Last week I tried to hold a conversation using translation technology: I spoke my native language, Bulgarian, while the others spoke English. The technology was impressive, but communication is more than a simple transaction of words. There are plenty of additional dimensions. Something seemingly insignificant, like a small gesture or a nod showing you are following the conversation, makes a real difference.

What can AI do in English assessment?

In the classroom, AI can provide round-the-clock opportunities to practise language and receive immediate feedback. That is something teachers cannot compete with, and it frees up their time to focus on the more social and emotional aspects of learning. AI can also generate data quickly: much as a running app can tell you how far and how fast you have run, AI can provide insight into individual learning performance. This is genuinely useful data for teachers, parents, and policy makers.

Bringing humans and AI together on assessment can also have a major impact in helping to ensure that testing is inclusive and fair. Humans understand the nuances of inclusion and accessibility, while AI can be a great help in adapting tests to the needs of test takers. For example, it can deliver a listening test at the test taker's own pace, pausing and changing the speed as needed. In this way, AI can help us do more from an accessibility and fairness perspective than was possible before.

What does AI find difficult in assessment?

But of course there are challenges! I'm often asked: which specific language skills does AI-based assessment struggle to evaluate? The short answer is: all of them, in some way. Take listening and reading, which are about comprehension. What did the test taker understand from reading a text? What did they understand from listening to someone? One challenge AI faces is that accurately measuring reading and listening ability requires well-calibrated content. AI can generate a reading text and some comprehension questions, but it does not do so consistently well.

The same applies to speaking. Take a conversation (or an argument!) with Siri, ChatGPT, or Alexa. It is impressive, but the AI does not actually understand anything: it is an interface to the data behind it, not intelligence or knowledge. Even when it appears to understand, AI cannot adapt its speech well, especially to the different proficiency levels of learners and test takers.

What are the major risks and concerns around AI?

When developing a strategic approach to the use of AI in education, the bottom line is to ask: what value does AI add? The biggest risk is the blind use of AI without a full understanding of its purpose and its impact on learners and candidates. In other words, AI must complement what humans can do, not lower assessment standards.

A major topic of discussion is the ethical considerations around the use of AI. Recognising the risks is essential for everyone involved in assessment, from governments to teachers and individual users. They need to be keenly aware of issues relating to inaccurate content, bias, copyright, and test integrity. And when weighing the ethics of its use, we should not ignore AI's substantial environmental impact. Finally, in assessment it is particularly important that AI models are trained on the right data, evaluated appropriately, and deployed with a very clear purpose.

What's next for AI and assessment?

I believe much of the immediate focus is on understanding how current AI innovations can improve language education and assessment. At Cambridge, the English group I work for is very focused on the role of AI in English learning and assessment, and we continue talking with teachers and other stakeholders to find the best ways to help them. In assessment specifically, we already offer popular English tests that use AI and computer-adaptive techniques to provide fast and accurate measures of English proficiency. The marking of these tests uses a hybrid approach in which both human examiners and AI play a part, allowing each to play to its own strengths.

Going forward, we must look critically at how AI can meaningfully improve test development, delivery, marking, adaptability, and security for all kinds of test takers. But perhaps most importantly, we need to keep creating an environment where AI is used meaningfully to provide fair and accurate assessment, rather than as just a shiny new gimmick. That is what really excites me!

Dr. Evelina Galaczi, Director of English Research at Cambridge University Press & Assessment.
