This article first appeared in Digital Edge, The Edge Malaysia Weekly on March 9, 2026 – March 15, 2026
More than four decades of sustained artificial intelligence (AI) research is underpinning Canada’s push to become a global destination for talent and investment in the sector.
Canada’s AI identity was shaped by pioneering researchers such as Yoshua Bengio and Geoffrey Hinton, who spent decades building the theoretical foundations of modern AI through their work on neural networks, which earned them the 2018 Turing Award — the highest honour in computing. The pair are widely regarded as godfathers of AI.
Hinton’s contributions were recognised with the 2024 Nobel Prize in Physics, shared with John Hopfield.
That strength in fundamental research has already drawn in global tech giants and helped spawn international players from within the country.
“If you have very famous researchers, professors, tech leaders, they usually attract great talent. It’s an expertise that has helped us develop that sector, bring in investment and bring in additional talent,” a Canadian trade and investment official tells Digital Edge.
That body of work also became the foundation for three national AI institutes: Montreal Institute for Learning Algorithms in Montreal, Vector Institute for Artificial Intelligence in Toronto and Alberta Machine Intelligence Institute in Edmonton — all established under Canada’s landmark 2017 national AI strategy.
Known as the Pan-Canadian AI Strategy, it was the first funded national AI strategy of its kind in the world. The country has since invested roughly CAD742 million (RM2.13 billion) in AI.
The government has doubled down on that foundation, creating a dedicated minister of artificial intelligence and digital innovation and in late 2025 launching an AI Strategy Task Force to shape the next phase of the national strategy.
Digital Edge speaks to industry players to understand how Canada’s AI advantage was built, and where it is going next.
This article was developed with the support of the Asia-Pacific Foundation of Canada and the government of Canada.
CIFAR builds on AI strategy
When Canada’s AI strategy was first launched, the objective was to build a strong domestic talent base.
Three national AI institutes were established — Montreal Institute for Learning Algorithms (Mila) in Montreal, Vector Institute for Artificial Intelligence in Toronto and Alberta Machine Intelligence Institute (Amii) in Edmonton — with the Canadian Institute for Advanced Research (CIFAR) coordinating an overarching national strategy to attract and retain researchers.
Alongside that, the CIFAR AI chair programme was established with the goal of recruiting leading AI researchers to Canada, while retaining its existing top talent.
Each chair holder is a professor affiliated with one of the three institutes, tasked not just with conducting research but training successive generations of students.
“What’s really important for the development of talent is, they’re also training generations of students who go out into the world. Some of them will become professors, and most of them will go into industry or create their own start-ups,” says Stephen Toope, president and CEO of CIFAR.
CIFAR’s coordination role across the three institutes has also been crucial, ensuring they develop distinct areas of specialisation rather than duplicating each other’s efforts, he adds.
“Canada is a really big country, and because it has provinces, as Malaysia has states, it would be very easy for there to have been really different approaches taken with very little coordination. CIFAR has functioned to help the various institutes coordinate their activity and make sure that we’re not tripping over each other duplicating efforts, but actually starting to have some areas of specialisation.”
Each major hub has developed a distinct research focus shaped by its existing economic and industrial strengths.
Toronto’s Vector Institute focuses on health, given the city’s concentration of hospitals and medical infrastructure; Amii on energy, reflecting Alberta’s status as an energy powerhouse; and Mila on serving small and medium businesses, in line with Montreal’s more diversified economy.
One of Vector’s original objectives — shared with Mila and Amii — was to solidify and grow Canada’s research base. By attracting and retaining top researchers at universities, Vector has enabled the creation of new academic programmes that feed a growing talent pipeline.
A key measure of that success is graduate retention, says Warren Ali, director of industry development at Vector. Vector has maintained a roughly 90% retention rate, with over 1,000 master’s students completing programmes annually.
As AI has moved from a narrow technical discipline to widespread public use, Vector’s mission has evolved with it, says Ali. Where the early work was about developing AI, the focus now is on ensuring that the same intellectual rigour that built the technology carries through into how it is deployed at scale.
The institute brings together five groups — universities, government, enterprise partners, healthcare organisations and start-ups — with 31 large multinational enterprise partners, 60 healthcare partners including major research hospitals in Toronto, and a start-up and scale-up ecosystem of roughly 300 companies.
“On the enterprise side, we try to keep our partners about six to 12 months ahead of what we feel is either more commercialisation ready or market awareness. We want to keep them on the leading edge, so we’re working in a very pre-commercial space,” says Ali.
“We’re not developing things that are going into production; we’re developing those early-stage pilots, proofs of concept and use cases that are being evaluated, that are either going to enhance, influence or even disrupt things that they are building at the moment.”
The next step
Canada’s AI strategy has since evolved beyond its research foundations. The focus has broadened to encompass commercialisation, safety and two global priority areas Canada aims to lead in — AI for health and AI for energy. CIFAR’s own research agenda mirrors this ambition, spanning AI safety, drug discovery and the intersection of machine learning and neuroscience.
Through Canada’s AI Safety Institute, CIFAR is working on cybersecurity vulnerabilities, misinformation and bias. Toope says the goal is not regulation, but building self-correcting mechanisms inside the systems themselves.
The aim is to understand why hallucinations occur within the algorithm itself and to develop mechanisms that would prevent such outputs from being presented to users as valid answers.
“We know [AI risks] are real. The risks of misinformation around elections. The risk that AI [can produce a] completely fake video, and that people won’t be able to understand what’s true and not true. The chance that AI is going to disrupt the job market absolutely dramatically in the near future. All of these are very real concerns,” says Toope.
“It’s important that in parallel to the research that’s done on the evolution of AI, there has to be an equal kind of investment on the risks of AI. CIFAR right from the beginning has always had a heavy emphasis on responsible AI. And now I would say it’s stronger than ever.”
On the health side, researchers are developing AI tools that guide surgeons in real time during operations, using medical imaging to detect heart damage with greater diagnostic accuracy, and building intelligent prosthetics that connect directly to bone and respond to neural signals, thereby enabling more natural movement for amputees.
Meanwhile, CIFAR’s Learning in Machines and Brains programme brings together machine learning experts and neuroscientists in a two-way exchange. AI researchers draw insights from the human brain while neuroscientists use AI tools to study its extraordinary complexity.
One example is Alona Fyshe, who uses machine learning to analyse brain imaging data collected while people read or view images. Her research investigates how the human brain represents meaning and compares these representations with those learnt by computer models trained on similar data.
Combining classical and modern AI methods
The application of AI to robots in uncontrolled, real-world environments such as schools and hospitals is one of the most significant gaps in robotics research, says Mo Chen, principal investigator of the Multi-Agent Robotic Systems (MARS) Lab at Simon Fraser University.
The obstacle is data. Collecting it at scale in such spaces is prohibitively expensive, he says, and installing cameras throughout hospitals or schools raises privacy concerns.
Chen’s solution is a “hybrid AI” framework that combines two types of AI: modern, data-driven machine learning to handle high-level, intuitive decisions, and classical algorithms grounded in physics and established rules to handle low-level execution.
“The way modern AI works is you first collect a lot of data and then you try to train an AI model that will make good predictions. But before modern AI, which I would argue only came about in the last 10 to 20 years, there were still many decades of AI research, and some might call those classical AI,” explains Chen.
“Classical AI would include things like the game-playing AI, where we look at the next move, look at how the opponent might respond and so on. Path planning for robotics is also classical AI because we’re going to look at the different routes.”
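The classical path planning Chen describes can be sketched with a simple graph search over a grid map. This is an illustrative example, not code from the MARS Lab: a breadth-first search that looks at the different routes around obstacles and returns the shortest one.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 4-connected grid.

    grid: 2D list where 0 is free space and 1 is an obstacle.
    Returns the shortest list of (row, col) cells from start to
    goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # predecessor of each visited cell
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk back through predecessors to recover the route.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # no route exists

# A small map with a wall: the planner routes around it.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
route = plan_path(grid, (0, 0), (2, 2))
```

No data or training is involved; the route follows entirely from the map and the search rule, which is what makes this kind of method “classical”.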
The classical component draws on known physical constraints, such as the fact that a walking person cannot teleport across a room. The data-driven component captures human intuition, which tells the robot where to move next.
To test whether the framework holds up in the real world, Chen’s lab built a follow-ahead robot — a wheeled robot designed to stay in front of a walking person.
The robot must predict where the person will be moments ahead and make large positional adjustments in response to even small movements. An early version using reinforcement learning alone worked well in simulation but failed in the real world, says Chen.
“It was working very well in simulation, but it would not work at all in the real world. I think it’s because any learning-based methods, including reinforcement learning, will have small errors that accumulate over time. And because the real world is a bit different from a simulation, these small errors will accumulate quickly, and then eventually we don’t really have anything that is useful.”

The fix was to use reinforcement learning for high-level decisions, such as where the robot should move next, while classical algorithms handled the actual navigation: calculating the precise path, avoiding obstacles and controlling the robot’s movements to get from point A to point B.
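The division of labour Chen describes can be sketched as a single control cycle, with a learned predictor on top and a simple physics-bounded controller underneath. Everything here is an assumed illustration: `predict_goal` stands in for the learned policy (in Chen’s lab, a reinforcement-learning model), and the controller is a generic bounded step, not the lab’s actual navigation stack.

```python
import math

def hybrid_step(robot_pos, person_pos, predict_goal, max_speed=0.5):
    """One control cycle of a hypothetical follow-ahead controller.

    predict_goal is the learned, high-level component: given the
    person's position, it predicts where the robot should be next.
    The classical, low-level component then moves toward that goal
    by at most max_speed per cycle, so small prediction errors
    cannot snowball into physically impossible motion.
    """
    goal = predict_goal(person_pos)        # learned: where to go next
    dx = goal[0] - robot_pos[0]
    dy = goal[1] - robot_pos[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-9:
        return robot_pos                   # already at the goal
    step = min(max_speed, dist)            # classical: clamp the motion
    return (robot_pos[0] + step * dx / dist,
            robot_pos[1] + step * dy / dist)

# Toy predictor: stay one metre ahead of the person along the x-axis.
ahead = lambda p: (p[0] + 1.0, p[1])
new_pos = hybrid_step((0.0, 0.0), (1.0, 0.0), ahead)
```

The key design point is that the learned model only proposes a goal; the classical layer owns the actual motion, which is why accumulated learning errors stay bounded.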
Chen’s start-up, MA Robot AI, is attempting to commercialise this research. It is currently piloting the hybrid framework in a real hospital setting, focusing on the autonomous delivery of lab samples through busy, unpredictable corridors.
The hospital pilot will begin with a single robot, allowing the team to gather user feedback and iterate before deploying a second version with improvements. The longer-term model is software licensing — embedding MA Robot’s technology directly into hardware from other manufacturers.

The intersection of hardware and software
There is significant overlap between hardware and AI development today, where building a product means not just designing and testing AI models, but also thinking carefully about where and how they are deployed, says Garry Chan, head of AI initiatives at VentureLab.
The relationship between the two domains is reciprocal because AI runs on chips, and AI can in turn be used to make chip design and manufacturing more efficient.
VentureLab is one of the few incubators in Canada that offers advisory services, investment readiness support and physical lab facilities under one roof, allowing semiconductor start-ups to design and test chips on-site, and AI companies to run machine learning models on in-house graphics processing units.
“When you’re building a product, you need to go through that product development journey — designing the AI model, testing it and so on. But now, oftentimes, there’s a whole deployment conversation. Where you deploy that AI makes a big difference,” Chan says.
“All of these require a range of capabilities — deploying a device, collecting data, scaling it, testing it, feeding it into the AI model and then inferencing it.”
VentureLab works extensively with edge computing companies, helping them optimise AI across a spectrum, from wearable devices and Internet of Things (IoT) sensors all the way to cloud infrastructure, says Chan.
VentureLab also supports companies that need to rework their product architecture, design and deployment, ensuring that their technology is built correctly for international expansion and that their data governance meets the required standards across different jurisdictions, he adds.
“If you look at Ontario or Canada’s industry base, there’s a lot of mining … food production, advanced manufacturing and construction. All of these basically require that range of capabilities with the deployment of maybe a device, an IoT device, collecting the data, scaling it, testing it, putting it into the AI model and then inferencing it,” says Chan.
“If you’re a mining technology company, you may already have customers, you may already have a technology that’s in market, but when you need to scale your technology to other countries globally, you need to make sure that your technology is architected the right way and that your data governance is up to standard.”
“That’s why they would come to an ecosystem like VentureLab to basically revisit some of the product decisions from design and architecture to deployment to think through how to scale this capability for global deployment.”
A look at BC’s AI ecosystem
Building a thriving AI ecosystem comes down to four things — talent, capital, computing power and customers. In British Columbia (BC), AI start-ups are growing rapidly, but capital and talent remain a challenge, says Rob Goehring, executive director of the AI Network of BC.
BC is home to over 600 AI companies — from core model developers to service providers, agentic AI development shops and businesses using AI as a product ingredient. Of those, 150 to 200 are “AI native”, meaning AI is the core of what they do, not just a tool they use, says Goehring.
There are also established players such as Sanctuary AI, which is building humanoid robots for industrial use, and Variational AI, which uses generative AI to design novel drug candidates.
Goehring says access to venture capital is a long-standing structural challenge for Vancouver and BC. Local venture capital exists, but it is thin, and the angel community is ageing.
Some Seattle-based investors have warmed to the Vancouver market, drawn by lower valuations and a favourable exchange rate, but there are still too few of them, he says.
“We are capital-starved. Our start-ups will get half a million dollars, where our US counterpart will get US$25 million. The disparity is that large. Now, getting US$25 million doesn’t make you a better company. It just gives you more room to make errors and ability to scale faster, but that’s not always the secret to success. But getting US$500,000 versus US$25 million, it’s still nice to get US$25 million,” says Goehring.
BC has deep engineering expertise, anchored in part by major Microsoft and Amazon offices in Vancouver. But Seattle and Silicon Valley are in the same time zone and a short flight away, and the compensation packages on offer there are difficult to match, he adds.
Goehring points to immigration red tape as compounding the problem. He calls for a fast-track visa system for highly qualified individuals, arguing that Canada’s existing rules are too complex and slow.
“The structural ones around cost of living and income disparity are macroeconomic challenges that are very, very hard for any of us to solve, and even for the government to solve. That’s a flywheel that needs to be built over a decade,” he says.
