Why one AI administrator is skeptical about AI

Three years after generative artificial intelligence technology became mainstream, predictions about how AI will transform the workforce and learning continue unabated.

Anthropic CEO Dario Amodei predicted last year that AI could eliminate half of entry-level white-collar jobs within just five years. More recently, Microsoft AI CEO Mustafa Suleyman offered an even bleaker outlook, predicting that most white-collar jobs "will be fully automated by AI within the next 12 to 18 months."

At the same time, universities across the country are rushing to prepare students for a workforce that will increasingly rely on AI. Many universities, including Ohio State University, the California State University System, and Columbia University, are trying to accomplish that in part by partnering with big tech companies like Google and OpenAI, which say their products can also enhance learning and instruction.


But Matthew Connelly, a professor of history and associate dean for AI initiatives at Columbia University, is critical of the higher education sector's rush to partner with tech companies without much evidence that AI tools improve learning outcomes. Rather, he believes such partnerships provide a training ground for technology companies to develop the very AI systems that replace human workers, usurping the knowledge-creation business that higher education has long dominated.

"Young people are rapidly becoming dependent on AI and losing the ability to think for themselves," Connelly wrote last week in a guest essay for The New York Times titled "AI Companies Are Preying on Higher Education." "And rather than rallying resistance, academic administrators are aiding and abetting a hostile takeover of higher education."

Inside Higher Ed interviewed Connelly about the sources of his skepticism.

(This interview has been edited for length and clarity.)

Q: What does your work as associate dean for AI initiatives involve?

A: A lot of things. For example, we are running what is probably the world's largest randomized controlled trial in undergraduate writing courses, exploring how to encourage ethical and effective use of AI and how to prevent misuse that undermines learning. We also do a lot of curriculum review, including working with departments to determine whether changes need to be made to how courses are taught.

Q: Before taking on the role, how were you using AI-powered technology in your work as a historian?

A: For the past 15 years, I have been working with data scientists, computer scientists, and statisticians to use machine learning and natural language processing to explore history in new and increasingly necessary ways. Historians are overwhelmed with data, and the way we were trained to work, going into archives and examining paper files, is disappearing. I am working with colleagues to devise new approaches that leverage the incredible strengths of artificial intelligence to do better and more rigorous historical research.

But while there's a lot of talk about what AI can do, when you actually sit down and test those claims, you find that many of AI's possibilities are just that: possibilities. There's a huge gap between what people say is possible and what proves doable when you actually try it in a controlled setting.

Q: Why are you skeptical of the technology industry’s claims that AI-powered tools have the power to transform teaching and learning?

A: We've been here before. For example, many of us remember being told that massive open online courses would put some versions of higher education out of business. But then the coronavirus pandemic forced us all to experience online courses, and many found them a far inferior experience to in-person instruction.

For decades, educational technology advocates have claimed that their products will solve all of higher education’s problems, make everything cheaper, and lead to dramatic improvements. But what research has found time and time again is that these tools often have negative effects. In some cases, there may be some positive impact, but this must be weighed against the costs and other unintended consequences.

Why should we believe that AI-powered tools will change anything when the proponents of educational technology have been proven wrong time and time again? The onus should be on them to show us the rigorous research that demonstrates the real improvements that result from AI implementation.

Q: How have you seen the widespread adoption of AI tools both support and weaken student critical thinking?

A: Yes, AI can support more rigorous learning, but only when students collaborate with professors to test what's possible. It requires a lot of trial and error. By contrast, AI does nothing for learning when large numbers of students use it without any instruction, testing, or research showing what kinds of uses are effective at scale.

It's always been difficult to get students to take something deeper from their learning experiences than just a piece of paper at the end, but it's even harder when technology suddenly allows them to get the grades they want without having to do anything. We're trying to train the scientists and engineers of the future, and these AI companies are making that increasingly difficult. It's like eating our seed corn.

Q: Why do you think so many higher education institutions are eager to partner with technology companies to implement AI tools, despite the limited evidence supporting their effectiveness?

A: Many institutions believe that if they don't make AI available to their students, those students won't be ready for the workplace. It's a frightening position to be in: you don't want to be the only institution not offering AI, even though you don't know whether AI actually helps students learn, when all your competitors are implementing it at scale.

Q: What threats do you think these AI partnerships pose to the higher education sector?

A: It's insulting that these companies claim their products perform at the level of Ph.D. students, when the only way that's possible is through systematic theft, the theft of the intellectual property of countless academics.

And while many of these agreements state that these companies may not use our data for training, the only data provided to us is very high-level usage data, such as how often people in our community log into our programs. We don’t know how people use these systems.

For example, students can feed their classmates' papers into Gemini and ask it to generate responses and critiques they can pass off as their own. An instructor who uses one of these programs to grade student papers may be doing the same thing. That means someone's work could be entered into these systems without permission, and these companies could plagiarize it.

It feels like the Wild West now. People can use these systems however they like.

Q: What can the higher education sector do to protect its role as a center of knowledge creation and rigorous intellectual inquiry?

A: We must unite. No institution can do this alone.

There is a huge opportunity for the first leaders in higher education to stand up and say, "We support human intelligence, and we're not going to help multitrillion-dollar companies develop technology that allows employers to use AI instead of hiring humans." We need to be clear that we support human intelligence and are interested in AI only to the extent that it helps make humans smarter.

We have to start protecting our intellectual property.
