Universities risk ceding control to Big Tech’s AI push

In their rush to adopt AI, universities risk surrendering their intellectual autonomy to Silicon Valley influence, a professor says.

Bruna Damiana Heinsfeld, an assistant professor of learning technologies at the University of Minnesota, said in an essay for the Civics of Technology Project, an educational platform that analyzes the social impact of technology, that universities are allowing Big Tech to reshape what counts as knowledge, truth, and academic value.

From multimillion-dollar partnerships with AI vendors to classrooms that incorporate corporate branding, she said universities are moving toward a model that bundles technology tools with the corporate identity behind them.

As academic leaders scramble to appear "AI-enabled," Heinsfeld warned that the field is drifting from critical scrutiny to compliance, risking a future in which Silicon Valley, not educators, dictates the terms of learning.

AI is not just a tool, she warns; it's a worldview.

Heinsfeld said AI tools promote a worldview in which efficiency is considered a virtue, scale is inherently desirable, and data becomes the default language of truth.

Universities that implement these systems without critical scrutiny risk teaching students that Big Tech's logic is not just helpful, but inevitable, she added.

Heinsfeld cited California State University as an example of this change.

The university signed a $16.9 million deal in February to roll out ChatGPT Edu to 23 campuses, giving more than 460,000 students and 63,000 faculty and staff access to the tool through mid-2026.

When students arrived at an AWS-powered “AI camp” in the summer, they found Amazon branding everywhere, including company slogans, AWS notebooks, and promotional items.

Risk extends beyond institutional strategies


Kimberley Hardcastle said generative AI is quietly moving knowledge and critical thinking from humans to Big Tech's algorithms.



Another scholar said that this problem is already visible in the mechanisms of everyday learning.

Kimberley Hardcastle, professor of business and marketing at Northumbria University in the UK, told Business Insider that universities need to overhaul the way they design assessments now that students' “epistemological mediators” – the tools that help them make sense of the world – have fundamentally changed.

Hardcastle said she supports asking students to justify their inferences, including how they reached their conclusions, what sources they consulted beyond the AI, and how they verified the information against primary evidence.

She said students also need intentional "epistemological checkpoints," moments built into the coursework that force them to stop and ask themselves: "Are you using this tool to augment or replace your thinking? Are you working through the underlying concepts, or just getting a general sense from the AI? Do you understand it, or are you just recalling information?"

The real danger is ceding the power to define truth

For Heinsfeld, the risk is that companies decide what constitutes legitimate knowledge. For Hardcastle, it is that students will not learn how to evaluate truth for themselves.

They argue that universities should continue to be places that teach students not just how to use tools, but also how to think.

Heinsfeld wrote, "Education should continue to be a forum for confronting the architecture of tools." Otherwise, she warned, "we risk becoming a laboratory for the very system we are supposed to criticize."

Hardcastle made a similar point, adding that this future will be shaped not just by the decisions of institutions, but by every moment when students accept AI-generated answers without knowing how to question them.




