In April 2024, I covered the risks and benefits of leveraging artificial intelligence tools to support companies’ corporate tax risk management efforts. This applies equally to Value Added Tax (VAT) and other government legislation-driven schemes.
We now know more about the impact that increased reliance on these solutions is having on people. This should concern not only organizations that manage these functions in-house, but also those that rely on external consultants.
I would like to start with the cognitive damage being done, illustrated by an everyday example.
I’ve been driving for years. Beyond that skill, I know how to fill a car with fuel and pop the hood, although I cannot guarantee I could open the hood of every car. Change a wheel? Fairly capable.
When customers in the UAE were first asked to fill up their own petrol, there were cases where people simply did not know how to do it. Before you laugh at that memory, I will confess that I have had my own Luddite moments, and could cite various examples. How does this happen?
As humans, we can get lazy and forget how things are done. We become dependent, losing both proficiency in the task we have handed over and the objectivity to assess it. More importantly, it also dulls our inquisitiveness.
A study published in June 2025 by scholars at the Massachusetts Institute of Technology examined the cognitive activity of three groups of people. One was not allowed to use the internet, one was, and the last was allowed to use whatever large language models and AI tools they wanted.
When brain connectivity was analyzed, those using AI showed the weakest results overall. Worse, some had trouble quoting their own output when asked. That makes sense: it was not really their own work.
Curiosity and hard-won experience have been central elements propelling humanity forward for centuries. Are we replacing safeguards born of the memory of past mistakes with blind deference to faceless systems?
Risk-averse employees will naturally gravitate towards these solutions. Why not? Management can hardly fire the AI, and if something goes wrong, all the blame falls on it.
Ask yourself: how smart is your tax function? When asked a question, do they answer by deconstructing the issues posed? Do they examine the query and uncover aspects that were not initially considered? Are positive feedback loops in their answers unconsciously teaching askers to better consider and frame future questions?
Or do their answers arrive with a definitive affirmation, leaving little doubt that the foundation is solid? Note that references to legislation may well be included, laid out in precise detail down to articles, clauses and sub-clauses. But how a lawyer responds formally in writing and how a tax advisor responds can read like different languages.
If you’re looking for signs to watch out for, those above would be the first red flags I would notice. This doesn’t mean the advice is wrong, but it does suggest the answer came from outside the function, with no guarantee that the provider really understood it. Finally, it strongly suggests that the issue has not been broken down and properly considered.
Perhaps now is the right time for a second set of eyes to assess internal capabilities? External providers who have these concerns raised during an engagement may want to spend more time than usual providing reassurance. Depending on deadlines and available budget, that level of comfort may not always be affordable.
Now for a peek behind the curtain at how the UAE’s Federal Tax Authority is developing and deploying AI. Last month, the authority announced what it called “five major AI-powered tax initiatives.” The most relevant to this article is the creation of an internal FTAgpt. The software is aimed at the authority’s own employees, supporting their ability to respond to external inquiries.
As anyone who has used one of these engines can tell you, learning how to ask questions the right way is essential to getting useful results.
The easiest way to understand this is with a simple math example: 2 * 3 + 4 = 10, but 2 * (3 + 4) = 14. Adding parentheses changes which elements of the expression are resolved first. Similarly with AI, the order and phrasing of questions is essential.
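The arithmetic above can be checked directly in a few lines of Python, which resolves operators by the same precedence rules:

```python
# Operator precedence: multiplication binds tighter than addition,
# so without parentheses the product is computed first.
no_parens = 2 * 3 + 4      # evaluated as (2 * 3) + 4
with_parens = 2 * (3 + 4)  # parentheses force the addition first

print(no_parens)    # 10
print(with_parens)  # 14
```

The same principle carries over to prompting: how you group and order the parts of a question determines what gets resolved first, and therefore what answer comes back.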
If the FTA provided some guidance, or preferably detailed query construction information, to entities with questions, it would speed up the process and lead to more effective results, a welcome outcome for all.
