Bangladesh has drafted a National Artificial Intelligence (AI) Policy to guide the use of AI in all sectors, with the aim of becoming a producer of homegrown innovations, modernizing public services and promoting inclusive economic growth.
The Draft National AI Policy Bangladesh 2026-2030 aligns the country’s technological ambitions with Vision 2041 and the United Nations Sustainable Development Goals (SDGs) and provides a formal framework for these goals.
The policy builds on an early 2024 draft that was never finalized. After the collapse of the Awami League-led government amid the socio-political changes of the July 2024 uprising, the new draft places greater emphasis on digital sovereignty, aiming to protect critical data, infrastructure and citizens' rights.
National Bangla LLM Development
The centerpiece of the draft policy is the development of an advanced Bangla-language national AI system, a large language model (LLM) akin to OpenAI's ChatGPT and Google's Gemini, to digitize and preserve Bangladesh's cultural and linguistic heritage. The model is intended to make AI technologies contextually appropriate and inclusive while protecting intellectual property from foreign exploitation.
To support such large-scale innovations, the government will adopt a ‘National AI Computing Strategy’ under which centralized graphics processing units (GPUs) will be procured and hosted in national data centers for use by various institutions and researchers.
Funding will come from an 'AI Innovation Fund', which will provide Tk 200 million to Tk 250 million by 2030 for research, development and commercialization. Start-ups and academic institutions will also receive targeted tax and customs incentives on imports of essential hardware such as servers and accelerators.
Risk-based regulation of AI systems
The policy introduces a risk-based regulatory framework that classifies AI systems as prohibited, high risk, limited risk, or low risk.
Prohibited applications include social scoring, indiscriminate biometric surveillance, and deepfakes aimed at disrupting democracy and elections. High-risk applications such as healthcare, law enforcement, and credit assessment require algorithmic impact assessment and rigorous human oversight.
An independent oversight board, established by an act of Parliament, will audit AI systems for bias and recommend the termination of applications that violate ethical standards or human rights.
The policy also introduces strict liability for high-risk AI, ensuring that adopters are liable for damages, regardless of intent.
Steps to protect jobs from AI threats
While AI could raise productivity by 4.3 percent, automation could threaten up to 60.8 percent of jobs in the apparel sector, affecting approximately 2.7 million workers, and put a total of 5.38 million low-skilled jobs at risk across sectors by 2041.
To prepare the workforce, AI education will be introduced in grades 8 and 9, alongside upskilling programs for workers already in employment.
The draft plan prioritizes sectors with the greatest potential impact, such as agriculture and healthcare. In agriculture, AI applications will support precision irrigation, pest detection and localized weather forecasting for 16 million households in Bangladesh.
In healthcare, AI will aid public health management and crisis prediction, but life-changing clinical decisions will remain in the hands of certified medical professionals.
The policy is designed to remain in effect until 2030, when it is to be replaced by a permanent Artificial Intelligence Act.
Dealing with practical problems
Faiz Ahmad Tayeb, Special Assistant to the Principal Adviser at the Ministry of Posts, Telecommunications and Information Technology, said the draft policy has three main goals: increasing AI readiness across institutions, academia and industry; improving government efficiency with AI; and enhancing service delivery to the public.
“Furthermore, we have also addressed the risks highlighted by UNESCO’s AI Readiness Assessment, including gaps in data protection, interoperability and cybersecurity. Many of these issues are being addressed through cybersecurity legislation, data protection legislation, and other initiatives,” he said.
Tayeb added: “We are developing Bangla LLM to improve data access for academia and industry, make local knowledge searchable, and build interoperability and responsible data exchange between countries so that AI can effectively solve real-world problems.”
He explained why a new policy was needed just a year and a half after the previous draft: the earlier version focused primarily on infrastructure, whereas the current policy addresses practical issues centered on service delivery.
Ashraful Goni, a faculty member at Stony Brook University in New York, praised the draft for positioning Bangladesh as a rights-based, human-centered, sovereignty-conscious AI nation that prioritizes ethical governance over rapid commercialization.
However, he cautioned that “strong regulatory frameworks without sufficient technical capacity can unintentionally slow down innovation. Risk-based regulation, mandatory algorithmic impact assessments, and centralized oversight can increase compliance burdens for early-stage innovators. AI is rapidly evolving, and policy needs to keep up.”
