Machine learning technology is changing how institutions make meaning from student feedback

Institutions spend a lot of time surveying students for feedback on their learning experiences, but once the numbers are crunched, the difficult bit is working out the "why".

The qualitative intelligence institutions gather is a gold mine of insights into the sentiments and specific experiences that drive the headline feedback numbers. Where students are notably positive, it helps to know why, so that good practice can be spread and applied in different learning contexts. Where students perceive some aspect of their experience negatively, it is important to know the exact nature of the perceived gaps, omissions, or injustices, so that these can be put right.

While conscientious module leaders will run their eye over student comments in module feedback surveys, qualitative data at scale overwhelms the naked eye: when monitoring modules at programme or cohort level, or when considering large-scale surveys such as NSS, PRES, or PTES. Even the most conscientious reader will bring perceptual biases, as interesting or unexpected comments tend to be foregrounded and given greater explanatory weight than they may merit.

Traditional methods of coding qualitative data require someone (or ideally more than one person) to manually split comments into clauses or statements that can be coded for themes and sentiment. It's robust, but incredibly labour-intensive. Institutions owe it to the students who complete surveys to respond to feedback and make improvements at pace, yet robust analysis of this kind is rare rather than standard practice. Finding time for this sort of detailed methodological work is not a priority, especially as resources become more constrained.
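To make the unit of analysis concrete, a single record from this kind of manual coding exercise might look something like the sketch below; the field names and values are invented for illustration, not any institution's actual scheme.

```python
# Hypothetical record from manual qualitative coding: each comment is split
# into clauses, and each clause is coded for a theme and a sentiment.
coded_clause = {
    "comment_id": 1042,                  # invented identifier
    "clause": "but feedback on the essay arrived too late to act on",
    "theme": "assessment_and_feedback",  # label from a human-designed codebook
    "sentiment": "negative",
    "coder": "coder_A",                  # ideally a second coder codes the same
                                         # clause so agreement can be checked
}
```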

Blow your mind

That's where machine learning technology can really change the game. Student Voice AI was founded by Stuart Gray, an academic at the University of Strathclyde (now working at the University of Glasgow). Working with Advance HE, he was able to train machine learning models on PTES and PRES datasets from across the country. Having since further trained the algorithms on NSS data, Student Voice AI now offers subscribers literal same-day analysis of the student comments in their NSS results.

Put the words "AI" and "student feedback" in the same sentence and some people's hackles will quickly rise, so Stuart spends quite a bit of time explaining how the analysis works. The term he uses to describe the version of machine learning Student Voice AI deploys is "supervised learning": humans manually label categories in the dataset and "teach" the machine about sentiment and topics, and the larger the labelled dataset the machine is exposed to, the more sophisticated it becomes. Through this process, Student Voice AI has arrived at a carefully curated set of themes and categories into which the majority of student comments consistently fall, trained solely on UK higher education student data, though Stuart adds that the categories can evolve.

“The categories are based not on what we think students are talking about, but on what students are actually saying and what they want to talk about. There could be more categories if we wanted, but it's about what is easy for a normal person to digest.”
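As a rough sketch of what supervised learning means in this context, consider the Python example below. This is plain scikit-learn with invented comments and labels, not Student Voice AI's actual models or category scheme: human-labelled examples teach a classifier, which can then assign unseen comments to the human-defined categories.

```python
# A minimal supervised text classification sketch: humans label the training
# data, and the model learns to reproduce those labels on new comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labelled comments: the "teaching" step.
comments = [
    "Feedback on my essays was detailed and arrived quickly",
    "I never knew when assessment results would be released",
    "The library study spaces were always full at exam time",
]
themes = ["assessment_and_feedback", "assessment_and_feedback",
          "learning_resources"]

# TF-IDF features plus logistic regression: a standard supervised baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, themes)

# Unseen comments are assigned to the human-defined categories.
print(model.predict(["The marking criteria were never explained to us"]))
```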

In practice, this means that institutions can see quantitative representations of student comments, sorted by category and sentiment. You can, for example, look at students' views of assessment and feedback, see them split into positive, neutral, and negative sentiment, whether overall, by department, by subject area, or longitudinally over time, and then click through to the relevant comments to see what is driving that feedback. This is very different from, say, dumping student comments into a third-party generative AI platform (and sharing sensitive data with a third party while you are at it). The time and effort saved is valuable, but there is also the potential to eliminate individual perceptual biases, and to aggregate and segment the data for the various stakeholders within the institution. It also makes it possible to compare qualitative feedback across institutions.
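A minimal sketch of that aggregate-then-drill-down pattern is below, assuming a hypothetical table of comments that have already been labelled; the column names are invented for illustration and are not Evasys's schema.

```python
import pandas as pd

# Hypothetical comments already labelled by theme and sentiment.
df = pd.DataFrame({
    "department": ["History", "History", "Physics", "Physics"],
    "theme": ["assessment_and_feedback"] * 4,
    "sentiment": ["positive", "negative", "negative", "positive"],
    "comment": [
        "Feedback was detailed and easy to act on",
        "Feedback came back after the next assignment was due",
        "Marking criteria were never explained",
        "The marker's comments were genuinely constructive",
    ],
})

# The quantitative view: sentiment counts per department for one theme.
summary = (
    df[df["theme"] == "assessment_and_feedback"]
    .groupby(["department", "sentiment"])
    .size()
    .unstack(fill_value=0)
)
print(summary)

# The click-through: the comments driving one department's negative signal.
mask = (df["department"] == "Physics") & (df["sentiment"] == "negative")
print(df.loc[mask, "comment"].tolist())
```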

Currently, Student Voice AI is partnering with the student insight platform Evasys to apply its machine learning technology to the qualitative data collected through the Evasys platform. And Evasys and Student Voice AI have been commissioned by Advance HE to code and analyse the open comments from the 2025 PRES and PTES surveys.

Evasys managing director Bruce Johnson is passionate about the potential for the technology to drive cultural change, both in the way student feedback is used and in informing institution-wide insight and action.

“When thinking about how to create actionable insight from survey data, the key question is: who is it for? Is it the module leader? The programme director responsible for a collection of modules? Is it directed to the head of department, or the pro vice chancellor, or the planning team? Each of them needs the data presented in a visually appealing way.”

“It's clear, coming from higher education, that different stakeholders have very different uses for student feedback data,” says Stuart Gray. “Educators at the chalkface are concerned with student engagement. At the strategic level, there are various stakeholder groups in professional services who are interested in trends and sentiment analysis, and though they may not previously have had proper sight of this, they can now generate reports that show what students are saying about their area.”

Seeing results

Duncan Berryman, head of student surveys at Queen's University Belfast, summarises the value of the AI analysis for his small team. Previously, schools were provided with Excel spreadsheets, and his team spent a lot of time explaining the data in those spreadsheets and working with colleagues on how to make sense of it. Being able to see a simple visualisation of student sentiment on the various themes means, as Duncan rather drily puts it, “if there's no change going on, it's no longer because you don't know what the student survey is saying.”

Parama Chaudhury, professor of economics and Pro-Provost Education (Student Academic Experience) at University College London, describes where qualitative data analysis sits in the wider ecosystem of improving the quality of teaching and learning. In her view, it is not particularly useful, for enhancement purposes, to compare one department's quantitative student feedback scores with another's: essentially, it is comparing apples and oranges. But set against the overwhelming volume and complexity of student comments, the apparent ease of comparing quantitative data means that people spend time trying to explain numerical differences rather than mining the qualitative data for more robust and actionable explanations that could give context to the scores.

In other words, it's not that people aren't working hard at enhancement, but that they haven't always had the best information to guide the work. “When I came into this role, quite a few people were saying, ‘I don't understand why the qualitative data says this, we did all these things,’” says Parama. “I've been in this sector a long time, received my share of module evaluation summaries, and have always questioned those summaries, because they are someone else's ‘reading’. Having a truly objective view from a well-trained algorithm makes a difference.”

UCL piloted a two-page summary of student comments in a specific department this academic year, and will roll out a version for all departments this summer. The data is not evaluated in a vacuum: it forms part of a broader institutional quality assurance and enhancement process that draws on a range of data offering different perspectives on areas for development. So far, the data from students has been consistent with what emerged from internal reviews, giving departments greater confidence in their processes and action plans.

None of this stops anyone from looking at particular students' comments, interrogating the algorithm's analysis, or triangulating it with other data. Marianne Brown, head of academic planning at the University of Edinburgh, says the value of the AI analysis lies in the speed of turnaround. Being able to share headline insights at pace (in this case via a Power BI interface) means that the information is still fresh when leaders receive the feedback, and the lead time for making a difference is longer than it would be if time had been lost to manual coding.

The University of Edinburgh is known for cutting-edge AI research, and boasts the Edinburgh (access to) Language Models (ELM) platform, which allows staff and students to access generative AI tools without sharing data with third parties, retaining and protecting all user data. Yet it has become clear that even a closed system like ELM is not well suited to the analysis of free-text student comments. Generative AI platforms offer the illusion of thematic analysis, but are far from robust, because generative AI works through sophisticated inference rather than analysing the meaning of the actual data. “Being able to put responses from the NSS or our internal student survey into ELM to produce a summary was helpful to begin with, but we questioned those summaries; we would still need robust verification of the output,” says Marianne. Similarly, Duncan Berryman says: “When I asked gen AI tools to show me the comments related to a selected theme, what came back wasn't the actual comments.”
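The kind of verification Marianne and Duncan describe can be made concrete by scoring a tool's labels against a human-coded sample, exactly as a second human coder would be checked. Below is a sketch with made-up labels, not either university's actual process.

```python
from sklearn.metrics import classification_report, cohen_kappa_score

# Hypothetical sentiment labels for the same ten comments: one set from a
# human coder, one from the tool being verified.
human = ["pos", "neg", "neg", "neu", "pos", "neg", "pos", "neu", "neg", "pos"]
tool  = ["pos", "neg", "neu", "neu", "pos", "neg", "pos", "pos", "neg", "pos"]

# Cohen's kappa treats the tool as a second "rater" and measures agreement
# beyond chance, the same statistic used to check inter-coder reliability
# in manual qualitative coding.
print(f"kappa = {cohen_kappa_score(human, tool):.2f}")
print(classification_report(human, tool))
```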

The holy grail of student survey practice is to create a virtuous circle: student engagement with feedback creates actionable data, which leads to enhanced teaching, which in turn gives students confidence that the process is authentic and strengthens their motivation to share feedback. In that quest, well-deployed AI can be an ally to institutions and resource-stretched teams, providing quick and robust access to aggregated student opinion. “The end result is improving teaching and learning,” says Stuart Gray. “And hopefully what we're doing is saving time on the boring manual parts, and freeing up time to make a real change.”


