A computer scientist at Heriot-Watt University has published a study warning that the use of generative AI is on the rise, and that designing, building, and operating machine learning systems with it involves risks that are not properly considered by developers or the public, including cyberattacks, data breaches, and bias against underrepresented groups.
Professor Michael Lowndes, from Heriot-Watt University’s School of Mathematical and Computer Sciences, published the findings in Patterns, a data science journal published by Cell Press. The paper, titled “Pitfalls and Risks of Generative AI in Machine Learning,” identifies four ways in which generative AI is currently being applied within machine learning systems and examines the compounding risks each poses.
Where risks are compounded
Professor Lowndes identifies four applications of generative AI in machine learning, each with different risks: as a functional component within machine learning pipelines; in the design and coding of those pipelines; in the synthesis of training data; and in the analysis of machine learning output. Risks increase significantly when LLMs are used for multiple tasks within a single system, or when they behave agentically, i.e. when they can autonomously use external tools to solve problems.
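To make the compounding concrete, consider the minimal sketch below. It is an illustration rather than anything from the paper: llm_complete is a hypothetical stand-in for any LLM API, and the function names are ours. It stacks two of the four applications, synthesizing training data and analyzing model output, inside one pipeline, which is exactly the multi-task pattern the paper flags as especially risky.

```python
# Illustrative sketch only. 'llm_complete' is a hypothetical placeholder for
# any LLM API call; the point is structural: the same opaque component
# appears twice in one pipeline, so its errors can compound.

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call. A real system would hit a model API here;
    the output is non-deterministic and may contain hallucinations."""
    return "placeholder response for: " + prompt

def synthesize_training_rows(n: int) -> list[str]:
    # Training-data synthesis: any bias or hallucination introduced here
    # is baked into every model trained on these rows.
    return [llm_complete(f"Generate labeled example #{i}") for i in range(n)]

def explain_prediction(prediction: str) -> str:
    # Output analysis: if the same opaque model also produced the training
    # data, errors now stack, and the system can no longer demonstrate how
    # it arrived at a decision.
    return llm_complete(f"Explain why the model predicted: {prediction}")

if __name__ == "__main__":
    rows = synthesize_training_rows(3)      # opaque step 1
    print(explain_prediction("high risk"))  # opaque step 2, built on step 1
```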
The central concern is that LLMs operate without transparency: they make mistakes and hallucinate information in ways that are unpredictable and difficult to assess. This opacity poses particular problems in regulated sectors.
Professor Lowndes said: “In fields like medicine and finance, there are laws in place to ensure that machine learning systems can demonstrate that they are trustworthy and explain how they arrive at decisions. As soon as you start using LLMs, that becomes very difficult, because LLMs are very opaque. It’s important that the public is aware of the limitations of generative AI systems.”
Cost reduction becomes a risk factor
The paper draws a direct line between commercial pressures and the adoption of generative AI in machine learning, warning that cost reduction is the primary motivation for many deployments and that this creates risks for end users without commensurate benefit.
Professor Lowndes said: “Companies will implement these systems for reasons such as cost savings. While this can improve the end-user experience, it can also have negative effects such as bias and inequity.”
Regarding complexity risks, he added: “If you have generative AI working in different ways within your machine learning workflows and systems, those uses can interact in ways that are unpredictable and hard to understand. My advice at this point is to avoid overly complex uses of generative AI in machine learning, especially if you are in a high-risk sector where decisions affect people’s lives and livelihoods.”
Caution, not prohibition
Professor Lowndes stops short of opposing the use of generative AI in machine learning outright, framing his position as a call for good judgment. He said: “Machine learning developers need to be aware of the risks of using generative AI in machine learning and strike a wise balance between increased functionality and the associated risks. Given the current limitations of generative AI, I think this is a clear example that just because you can do something doesn’t mean you should.”
The paper arrives amid growing pressure in EdTech and other fields to rapidly integrate generative AI into existing systems. Machine learning is already being incorporated into high-stakes education and workforce contexts, such as assigning students to programs, processing applications, and powering personalized learning tools.
