Use AI responsibly

Machine Learning


Sahishnu Pudial

These days, when something goes wrong, people are quick to blame AI. But the deeper question is: was it really AI, or was AI used as an excuse to avoid human accountability and responsibility? Improving AI literacy requires three intersecting areas: core technical skills (such as computer vision and machine learning), applied skills (ethics, data governance, and regulatory understanding), and interdisciplinary expertise (such as human-centered design). Have we checked whether we are all sufficiently equipped with these skills?

In a recent controversy, Member of Parliament Ashika Tamang claimed that a viral video of her dancing while holding a copy of the Constitution had been created using AI. People were quick to criticize AI without considering the context or understanding the role humans play in content creation and distribution.

This is where the concept of human-in-the-loop (HITL) comes into play. HITL emphasizes that AI is not autonomous; it works in conjunction with humans. Humans design, train, deploy, and oversee AI systems. When errors occur, it is often not the AI that is failing, but the human systems responsible for guiding, monitoring, and validating its output. In the case in question, whether or not the video was generated by AI, the controversy centers on how humans interpret, share, and react to AI-generated content. The real failure lies not in the technology itself, but in its monitoring, contextualization, and responsible use.

Similarly, during the Gen Z movement on September 8 and 9, 2025, former Prime Minister Sher Bahadur Deuba claimed that a video purportedly showing a large amount of cash at his Budhanilkantha residence was likely an AI-generated hoax. Likewise, at a recent party congress, CPN-UML Party leader Ram Bahadur Thapa blamed the election results on AI and algorithms, a claim that reflects a misunderstanding of how much AI can actually do autonomously.

Changing the narrative around AI mistakes is essential. Rather than saying "AI failed," we should acknowledge the responsibility of the humans behind the system. Mistakes can occur due to data bias, misuse, lack of oversight, or over-reliance on automated output. By taking a human-in-the-loop approach, we ensure accountability, ethical use, and better outcomes.

This perspective is particularly relevant to Nepal, where AI is emerging as one of the most transformative technologies of the 21st century and is expected to revolutionize healthcare, education, agriculture, governance, and media. AI brings both opportunities and challenges, but the growing tendency to blame it for errors and social problems risks undermining accountability and distorting public perceptions of innovation.

If the narrative is not corrected, misunderstandings and misplaced blame can hinder innovation and responsible adoption. AI is a tool, not a scapegoat. As the example of Ashika Tamang shows, we must focus on how humans use, monitor, and contextualize AI systems. By shifting the narrative from "AI made a mistake" to "the humans in the loop are accountable," we enable society to use AI responsibly and ethically.

The future of AI in Nepal and the world depends on embracing responsible human oversight rather than fearing mistakes. What we need to understand is that AI does not operate in isolation. Humans teach it, adjust it, and decide how to use it. So if something goes wrong, the lesson is not that AI is bad, but that we need better human governance, ethics, and responsibility in the use of AI.
