New research in the UK and Japan shows that while people are open to MPs using AI as a tool, they are very reluctant to hand over democratic decisions to machines.
Artificial intelligence has permeated every corner of our lives and is starting to become a centerpiece of politics. Conservative MP Tom Tugendhat recently criticized MPs for using ChatGPT to draft parliamentary speeches, warning that elected officials should not rely on machines to make decisions. His comments capture broader concerns. Should AI be involved in democratic decision-making?
Proponents of AI in parliament argue that it could help lawmakers cope with the sheer volume of bills, public submissions, and policy documents that cross their desks. But critics worry that over-reliance on AI could undermine accountability and public trust.

In our new research, the TrustTracker team surveyed people in the UK and Japan to find out where they draw the line when it comes to their representatives using AI. Respondents were cautiously accepting, and far more comfortable with politicians using AI as a source of advice than as a substitute for their own decision-making.
In the UK, almost half of the 990 respondents did not support even the assistive use of AI by MPs. And nearly four out of five outright rejected the idea of AI or robots making decisions on behalf of MPs.
The 2,117 Japanese respondents were somewhat more open-minded, which is perhaps to be expected given Japan's extensive experience with automation and robotics. But they too expressed strong opposition to delegating decisions to machines. Support for assistive uses was higher, but remained cautious.
Young men were consistently more supportive of AI in politics, while older people and women were more skeptical. And trust turned out to be key: people who trust their government were more likely to accept AI assisting their legislators.
Our results also largely reflected participants' broader attitudes toward AI. Those who believed AI would be beneficial and were confident using it were far more supportive; those who feared AI strongly opposed it.
Interestingly, ideology also played a role, but in opposite directions in the two countries. In the UK, people on the political right were more supportive of AI in parliament; in Japan, people on the left expressed more tolerance.
Public tolerance for the use of AI in politics exists, but within limits. People expect their representatives to use new tools wisely; they do not want the reins handed over to a machine.
It is important to distinguish between assistance and delegation. AI could make parliaments more efficient, helping lawmakers scrutinize evidence, formulate better questions, and simulate the outcomes of policy choices. But if the public feels AI is replacing human judgment, support will evaporate.
For parliaments, institutions that depend on trust and legitimacy, this is a red flag. If reforms outpace public consent, unease may quickly turn into opposition.
National contrasts
Cross-national comparisons are revealing. Japan has a culture that is broadly tolerant of robotics and automation, and concepts like Society 5.0 cast AI as part of a positive national future. Yet even there, people draw the line at political decision-making. In the UK, discussions tend to be framed in terms of ethics and responsibility, and British respondents were generally more cautious but also more polarized by ideology.
Taken together, these findings show that public opinion does not simply reflect cultural stereotypes. Support is conditional and situational, and tied to broader trust in politics.
AI is coming to politics whether we like it or not. Used judiciously, it could help parliaments function better, faster, and more transparently. Used carelessly, it could undermine the trust and legitimacy at the heart of democracy. In other words, AI can advise, but it must not govern.
