What are AI “hallucinations” and how can architects prevent them?
AI hallucinations occur when an AI model produces output that is inaccurate, meaningless, or simply fabricated. This is one of the biggest drawbacks of genAI and something to be vigilant about: hallucinations are a known phenomenon with genAI tools, and their output can seem entirely plausible.
May suggests that organizations can partially mitigate such risks by ensuring consistent human checks, or by limiting the use of generative AI tools to matters where hallucinations would not have a significant impact.
Organizations should also discuss with their insurance brokers the intended use of AI tools, either as part of their scope of work/services or as part of their deliverables, to ensure that they are fully insured in the event of a claim related to the use of AI tools.
Also relevant to this discussion is how architects would be compensated if, for example, data were lost or compromised while using a genAI tool, causing problems for the project.
It’s always better to talk to your insurance broker for clarification than to discover potential coverage issues later, May says.
She also points out that internal guidance, policies and training will be important to ensure awareness and best practices within the organization.
Are there security, ethics, or bias issues with AI models?
According to May, any business should ideally be comfortable with, and clear about, how data input is (or isn’t) stored and where that data is located, for security reasons. It is often helpful to speak with your genAI provider, your in-house IT team, and your practice’s digital specialist to understand what security procedures are in place and whether your practice needs any training or guidance to ensure good practices are followed.
Some commenters have expressed concern that bias exists in the historical data on which models are trained, which could affect the AI tool’s response to prompts. Again, having a human check for such bias and bad historical data is often the best way to manage these issues.
May also recommends checking whether your organization’s use of AI complies with the General Data Protection Regulation (GDPR).
Thank you to May Winfield, Global Director of Commercial, Legal and Digital Risk at Buro Happold. This article does not constitute legal or professional advice and readers should seek professional advice before acting on its contents.
Text by Neil Morris. This is a professional feature edited by the RIBA Practice team. Please give us your feedback and ideas.
RIBA Core Curriculum Topics: Design, Construction and Technology.
As part of our flexible CPD programme, specialist features count as microlearning. Find out more about the latest CPD Core Curriculum and how to meet your CPD requirements as an accredited member.
July 3, 2025
