High-profile gaffes over use of AI set off alarms in Singapore and other countries

Applications of AI


Singapore: Is it ethical to publish content generated by artificial intelligence based on past interviews with celebrities?

The public backlash against Esquire Singapore’s AI-generated interview with Mackenyu, star of Netflix’s live-action series One Piece, seems to suggest otherwise.

The lifestyle magazine explained that the decision to use AI to fabricate interview answers was made after the actor failed to answer questions in time for publication.

What are some other recent AI application failures that shouldn’t be repeated? The Straits Times highlights nine high-profile cases.

1. AI-generated interview by Esquire Singapore

For the March 2026 issue of Esquire Singapore, Japanese-American actor Mackenyu (known professionally by this single name) took part in the cover shoot, but was unable to answer interview questions via email in time.

To fill this gap, the magazine’s editorial team uploaded past interviews with celebrities to the AI tools Claude and Copilot to generate fresh answers to new questions.

The piece quickly drew backlash, particularly for its AI-generated comments about the actor’s relationship with his late father, and was later taken down.

Critics called the move lazy and unethical journalism, but Esquire defended it as an intentional creative choice tied to the issue’s theme, adding that it had taken note of the feedback for future editorial decisions.

2. Lawyers cited fictitious cases and were ordered to pay S$5,000 (US$3,925) each

In March 2026, two lawyers were ordered to pay S$5,000 each after two fictitious cases were cited in the final submissions of a civil dispute.

Goh Peck San, lead attorney at PS Goh & Co, engaged Amarjit Singh Sidhu of Amarjit Sidhu Law Offices to draft the final submissions.

Amarjit then passed the job to a paralegal, who left the law firm in July 2025 and could no longer be contacted.

Neither attorney was aware that AI tools were being used to prepare the draft, and the citation fabrication was only discovered after opposing counsel alerted them in a reply filing.

This incident marks the second reported case of AI-generated fake legal citations in Singapore.

In October 2025, lawyer Lalwani Anil Mangan of DL Law Corporation was fined S$800 after a non-existent case generated by AI was discovered in court documents.

Justice S. Mohan warned that such mistakes pose a serious threat to the administration of justice.

3. The New York Times cuts ties with freelancer who used AI in book review

Freelance writer and journalist Alex Preston used AI tools to draft a January 2026 book review for The New York Times, but ended up incorporating phrases and descriptions from a Guardian review published in August 2025.

The similarities were spotted by a reader, prompting the American newspaper to investigate Mr Preston, who had written six book reviews for the publication between 2021 and 2026.

He said AI had been used only in his latest review, but the Times said in March 2026 that it would stop working with him.

The paper said Mr Preston’s reliance on AI and unsourced material violated its editorial standards.

Mr Preston, who is also the author of several books, apologized to the Times, the Guardian and the authors of the Guardian’s book reviews.

4. AI teddy bear sold by Singaporean company talks about sexual fetishes

In early November 2025, Kumma, an AI teddy bear sold by a Singapore-based company, drew warnings from safety testers after the toy discussed sexually explicit content and gave advice on where knives, pills, matches and plastic bags could be found in the house.

The $99 toy reportedly veered into graphic sexual content after users activated it with the word “kink.”

Singapore-based company FoloToy pulled the toy from shelves and spent a week tightening content controls and child-safety guardrails before gradually resuming sales.

5. Deloitte refunds Australian government for AI fabrication in report

A July 2025 report by professional services firm Deloitte Australia was found to contain fabricated quotes from Federal Court judgments and references to non-existent academic research.

The findings forced Deloitte to refund more than 20 per cent of the A$440,000 (US$310,860) fee it had been paid by the Australian Department of Employment and Workplace Relations to prepare the report.

Deloitte published a revised version of the report in October 2025 after the errors were flagged by health and welfare law researchers at the University of Sydney.

The company also disclosed that it had used Azure OpenAI to help produce the report.

6. Top Australian barrister apologizes after AI fabricated case citations

In August 2025, Australian independent barrister Rishi Nathwani apologized after court filings in a teenage murder case were found to include fabricated AI-generated citations and non-existent judgments.

The false documents submitted to the Supreme Court of Victoria delayed the resolution of the case by 24 hours.

Mr Nathwani is a King’s Counsel, a title reserved for Australia’s most senior barristers.

7. Car maintenance company that posted fake AI reviews gets caught

Over a two-year period, the owner of auto servicing company Lambency Detailing used ChatGPT to fabricate five-star customer reviews on auto marketplace Sgcarmart, impersonating eight customers.

Lambency Detailing’s ruse unravelled when a customer found a review posted in her name and filed a complaint with the consumer watchdog, the Competition and Consumer Commission of Singapore.

An investigation into Lambency Detailing’s holding company, Quantum Globe, began in January 2025 and revealed that the names, vehicle registration numbers and vehicle photos of seven other customers had been published without their permission.

Quantum Globe has since been placed under watch by the consumer watchdog.

8. Non-existent books recommended by a Chicago newspaper

Authors such as Chilean-American novelist Isabel Allende and Korean-American author Min Jin Lee are real, but the books attributed to them on the Chicago Sun-Times’ March 2025 summer reading list are not.

The fabricated titles were generated by Claude, an AI assistant, after freelance writers used the tool to create reading lists that mixed invented titles with real books.

After the incident, the newspaper said it would specify when content was provided by a third party, and that it was reviewing its relationships with contractors to ensure they met the standards expected of a news organization.

9. Airline held liable after chatbot provides incorrect information about refund policies

Air Canada was ordered to pay more than C$800 (US$578) after its chatbot wrongly told a passenger travelling to his grandmother’s funeral that he could book a ticket at full price and later request a bereavement discount as a refund.

That information was later found to be inaccurate.

The passenger sued the airline, which argued in its defense that the chatbot was a “separate legal entity” responsible for its own actions.

A Canadian tribunal rejected this argument and ruled that the airline remained liable for all information published on its website, including information provided by its chatbot.

How to identify AI-generated content

1. Unnatural details and contradictions

Images and videos created with AI often show extra fingers or teeth, background distortion, and jerky or unnatural movement, such as odd blinking and strange facial expressions. Audio that is out of sync with lip movements and on-screen action is also a red flag.

2. Strange repetition

Overly polished or repetitive sentences are signs of AI-generated text. AI tools also tend to paper over gaps in their understanding with buzzwords and jargon.

3. Verify against reliable sources

Alarm bells should ring when text, images, or videos lack information about their source, yet make highly provocative claims.

Users can also use reverse image search tools to trace an image’s original source, examine any available metadata, and cross-check material against trusted news organizations and official sources. – Straits Times/ANN
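The metadata check above can be illustrated with a short Python sketch using the Pillow imaging library (an assumption; any EXIF reader works, and the file name demo.jpg is purely illustrative). Genuine camera photos usually carry EXIF fields such as DateTime, Make and Model, while AI-generated images often carry little or none, so an empty result is one weak signal to weigh alongside the other checks.

```python
# Sketch: inspect an image's EXIF metadata with Pillow (pip install pillow).
from PIL import Image
from PIL.ExifTags import TAGS


def exif_summary(path: str) -> dict:
    """Return a {tag_name: value} dict of whatever EXIF data the file carries."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


# Demo: write a tiny JPEG with a DateTime tag, then read it back.
demo = Image.new("RGB", (8, 8), "white")
exif = Image.Exif()
exif[306] = "2026:03:01 12:00:00"  # EXIF tag 306 = DateTime
demo.save("demo.jpg", exif=exif)

summary = exif_summary("demo.jpg")
print(summary)
```

An image that returns an empty dict is not proof of AI generation (metadata is often stripped on upload), but combined with a reverse image search it helps establish provenance.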


