Business schools seek clear AI guidelines

From applications to final assessments and everything in between, artificial intelligence is transforming the way students learn, teachers teach, and universities operate. University leaders are responding to rapidly changing technology, even though the implications and the best policies to adopt are not yet clear.

Many business schools claim to be innovating because they are under pressure to adapt from students and corporate customers. But while academics and students alike recognize the need to learn about and apply AI, many are also concerned about its risks, from hallucinations and privacy to accusations of fraud.

“AI creates a real risk of disintermediating traditional education,” said David Marchick, dean of American University’s Kogod School of Business. “Universities need to adapt to incorporate AI fluency and literacy into every aspect of teaching and learning. For the past three years, we’ve been obsessed with bringing AI into every activity. But it’s very difficult. There’s no roadmap.”

Marchick’s views reflect a growing debate about the need for clearer benchmarks and guidelines to balance AI’s benefits against its uncertainties and unintended consequences. The FT surveyed a number of business schools about their policies and the extent to which they have developed standards and indicators to measure progress and best practice.

There is little doubt about the recent increase in AI courses and degrees. A tracker created by the Center for Inclusive Computing at Northeastern University in Boston estimated late last year that there were 728 undergraduate programs focused on AI in computing departments alone at 584 universities across the United States.


Open Syllabus, an organization that analyzes materials published on university websites, has identified a small but sharp increase in the proportion of graduate business courses that mention AI in the title, description, learning outcomes, or topic overview, even as mentions of other topics, such as environmental, social, and governance (ESG) issues, have declined.

There are ample signs of innovation. The University of Cambridge’s Judge Business School has launched an AI-powered interactive educational case that allows students to converse in real time with executives from a fictional battery manufacturer. At HEC, marketing professor Peter Ebbes is cautiously experimenting with AI as an assistant to help improve marking.

Alessandro Di Lullo, chief executive of the Digital Education Council (DEC), a technology-focused university federation, highlights work at Bocconi in Italy, Imperial in the UK, HKUST in Hong Kong, and IE and Iese in Spain. He also notes that Northeastern University is developing AI training for faculty “peers,” who could then spread that knowledge to their colleagues.

DEC recently announced benchmarking ideas focused on five areas: AI capabilities and curriculum; educational innovation; institutional preparedness and governance; student and faculty experience; and ethics, trust, and inclusion. “We see a huge need and demand from our members, who ask us to help them benchmark against their peers and competitors,” says Di Lullo.

However, translating principles into metrics is not easy. As Di Lullo points out, “more is not always better.” One possible benchmark, for example, is the extent to which generative AI tools are freely accessible to students and staff. But that measure favors better-resourced business schools that can afford to pay for access, and it says little about whether and how students actually use the tools.

An analysis conducted by Anthropic last year raised concerns not only about cheating, but also that many students using the company’s Claude AI assistant relied on it to generate assignment answers in a purely “transactional” manner. Rather than engaging in “dialogue” to learn, they were “offloading” higher-order thinking such as analysis and problem-solving.

A separate Anthropic analysis of educators suggested that many were using the tool as an “extension” of their own work, collaborating with it on tasks such as developing instructional materials and writing grants. Seven percent of educator prompts concerned grading student work, and in almost half of those cases the grading was delegated to the system, even though it was widely seen as unsuitable for the task.

Business schools questioned by the FT proposed new ways to benchmark progress, including tracking the extent to which AI is integrated into courses, how familiar students and faculty are with the tools, the scope of research that includes AI, and measures of student ability and understanding reflected in certificates.

One widely shared sentiment is the need for ethical guidelines regarding the use and integrity of AI by both academics and students. “Transparency is key,” says Jamie Prenkert, dean of the University of Minnesota’s Carlson School of Management. The university is piloting different approaches across its courses, some allowing the use of AI and others restricting it.

Opinions differ on the desirability and value of AI and on the best way to integrate it. Developing metrics to gauge how pervasive it is in teaching and research will at least provide guidance as understanding of its impact matures. But, as one business school academic warns, “norms and possibilities are constantly changing.”

Join the conversation. Please email your thoughts on whether and how your business school should track AI best practices to: bschool@ft.com
