Are pensioners funding the militarisation of AI?

Teachers, nurses, university employees and public servants saving for retirement appear to be unknowingly funding technologies that could be deployed to target or surveil civilians in conflict zones. According to new analysis conducted by Empower in partnership with the Business and Human Rights Centre (BHRC), Open MIC and Heartland Initiative, at least 182 private and public pension funds have invested in companies developing high-risk artificial intelligence (AI) systems.

What does this mean for investors and pensioners? Read the full analysis and explore answers to frequently asked questions below.

FAQ: Are pensioners funding the militarisation of AI?

Q: Is generative AI already being deployed in conflict zones?

Yes. Researchers at the AI Now Institute note that foundation models are primarily used for intelligence, surveillance, target acquisition and reconnaissance; others have added cyber warfare to that list. Concrete examples of deployment are emerging. Palantir, for instance, has embedded generative AI into its defence products, which have reportedly been used by military forces for real-time operational planning and surveillance in conflict zones such as Ukraine. Palantir Technologies responded to a previous request for comment, stating that “providers of technology involved in non-lethal and especially lethal use of force bear a responsibility to understand and confront the relevant ethical concerns and considerations surrounding the application of their products…[t]his responsibility becomes all the more important the deeper technology becomes embedded in some of the most consequential decision-making processes”. Other countries, including the United States and India, have also tested AI-enabled systems in field operations, and the US has reportedly used generative AI in military operations in Venezuela and Iran.

Examples of defence systems facilitated by generative AI:

Anthropic: Despite Anthropic positioning itself as the ethical alternative to other generative AI companies, its Claude models were reportedly involved in the US operation to kidnap Venezuelan president Nicolás Maduro in January 2026, as well as in the US and Israeli bombing of Iran in February 2026, which resulted in the killing of Supreme Leader Ayatollah Ali Khamenei. It remains unclear, however, exactly how the tools were used in either case. An Anthropic spokesperson stated: “We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise”.

Vannevar Labs: With a USD 99m Pentagon contract, Vannevar Labs uses its own and OpenAI/Microsoft large language models (LLMs) to analyse global social media and open-source data from 180 countries in 80 languages. Vannevar Labs’ AI system can reportedly detect national security threats, translate intelligence and gauge the political sentiment of adversaries. While these tools promise efficiency and operational insight, their surveillance-scale data processing and risk of errors have also raised ethical concerns, particularly around privacy, bias, civilian impact and accountability. “Our real focus as a company,” said Vannevar CTO Scott Philips, “is to collect data, make sense of that data, and help the US make good decisions.”

Anduril Industries: In partnership with OpenAI, Anduril Industries was awarded a three-year contract by the Pentagon’s Chief Digital and AI Office (CDAO) to scale a “tactical Edge Data Mesh” intended to increase combatants’ access to systems that “power new insights and real-time decision making”.

According to the company, the system is already operational and designed to connect directly with sensors, weapons, platforms and robots. Anduril claims it will “accelerate the development of modern kill chains for every weapons system, in every service, and proliferate digital mass across every combatant command”.

Helsing and Mistral AI: A strategic partnership announced in February 2025 set out Helsing and Mistral AI’s intention to “jointly develop next-generation AI systems for the defence of Europe”. The partnership reportedly combines Helsing’s military technology, including strike drones, with Mistral’s generative AI models “to enhance human-AI collaboration on the battlefield”.


Q: Why is deploying generative AI in conflict-affected areas high risk?

Generative AI produces less predictable outputs than other types of AI, making its susceptibility to bias, hallucinations and misalignment particularly dangerous in high-stakes military uses such as targeting decisions. Generative AI mirrors patterns in its training data, which may underrepresent certain regions, populations or unique conflict scenarios; when information is missing, it invents plausible-sounding details. There is little public, peer-reviewed evidence (as opposed to ad-hoc operational examples) that AI-powered weapons are more accurate or capable of managing complex, volatile conflicts. Regarding AI-powered drones deployed in Ukraine, industry experts note that the software still frequently requires refinement and that its effectiveness varies with battlefield conditions. Most military AI tools also lack publicly disclosed safeguards demonstrating alignment with human rights or humanitarian law.


Q: What are the main concerns regarding AI-powered warfare?

First, generative AI is used to accelerate military decision-making, raising concerns about accuracy, lack of accountability and needless escalation.

By compressing the decision-making cycle, these systems can reduce human oversight and increase the risk of miscalculation or escalation. As one NATO commander noted, “alliance members are now using AI in the decision-making loop of observe, orient, decide and act… The speed of operations will dramatically change.” While 25 countries signed a 2025 declaration committing to ensure that AI-enabled weapons do not make lethal decisions without human oversight, major military powers – including the UK, US and India – did not sign on, leaving a gap in accountability.

Second, the line between civilian and military applications is increasingly blurred as Big Tech and startups alike put their foundation models to military use, resulting in civilian data being weaponised without consent.

OpenAI’s ChatGPT, Anthropic’s Claude and Meta’s LLaMA, originally developed for civilian-oriented chatbots and content generation, are being repurposed for defence. OpenAI has partnered with Anduril Industries to explore AI-assisted counter-drone and defence applications; Anthropic and Palantir have partnered with Amazon Web Services to bring Claude to the US Department of Defense; and Meta has made LLaMA available for US national security efforts – all demonstrating deployment in military contexts.

Generative AI models that are trained on massive public datasets, including social media content and blogs, may indirectly inform military applications without the public’s knowledge or consent.

Generative AI is therefore a dual-use technology, and one that can exacerbate conflict-related risks.

Recent analyses in +972 Magazine, the Financial Times and MIT Technology Review have highlighted these risks, including reporting that the Israel Defense Forces’ (IDF) use of AI targeting systems purportedly meant that thousands of civilians – mostly women and children – were killed by air strikes relying on an AI program’s “decisions”.

One system, called Lavender, reportedly analyses information collected via mass surveillance of the millions of residents of the Gaza Strip in order to assess the likelihood of each individual being affiliated with Hamas. The machine allegedly scores each Gaza resident on a scale of 1 to 100, indicating “how likely it is that they are a militant”. The IDF reportedly authorised sweeping use of the system “with no requirement to independently check why the machine made that choice or to examine the raw intelligence data on which it is based”. Another AI system, known as “Where’s Daddy?”, allegedly tracks targeted individuals to their homes so that bombings can be carried out there, “usually at night while their whole families were present”. A United Nations analysis in November 2024 found that nearly 70% of deaths in Gaza over the past year were women (26%) and children (44%), and that 80% of victims were killed in residential housing.

Generative AI combines the lethal potential of AI-powered analytics – as used in the case of Where’s Daddy? and Lavender – with unprecedented mass data collection, accelerated decision-making by the humans (if any) in the loop, and reduced human oversight and understanding of how or why certain suggestions for military action are being “generated”.


Q: Which companies are implicated in the development of high-risk AI systems for defence applications?

We identified 32 companies that had developed, deployed, marketed or otherwise positioned their generative AI products for military, national security or defence applications, or partnered with others to do so, creating a likelihood of exposure to conflict-affected areas. According to public reporting, these companies have positioned themselves in a variety of ways, including by providing “mission critical software”, helping to “transform the speed and agility” of the military, and transforming “data complexity to decisive advantage” over adversaries, amongst other offerings. While many reportedly hold contracts with governments for national security and defence purposes, others have been linked to defence agencies or have relationships with partners developing military-specific applications. Reports or announcements that executives, representatives, board members or other decision makers have direct links to military or intelligence agencies were another factor for inclusion. Evidence supporting inclusion drew from company websites, product descriptions, press releases, news reports and publicly available information linking a company to national security, military or defence agencies or initiatives.


Examples of company selection include:

  • Anduril Industries, Anthropic, OpenAI and Scale AI, which have secured contracts or agreements with the US Department of Defense to develop or deploy AI capabilities for military use.
  • Cohere and Databricks, due to partnerships enabling deployment of AI systems across defence and government environments.
  • AI21 Labs and DeepSeek AI, based on investigative reporting linking their technologies to military or intelligence use cases, including surveillance.
  • Helsing GmbH and Mistral AI, identified due to explicit partnerships to develop next-generation AI systems for defence applications.
  • Thinking Machines Lab, included following announcements that a senior advisor affiliated with the company had been sworn into the US Army Reserve.
  • insoundz, included for its development of generative AI audio technologies that are positioned for use in security contexts and “designed for surveillance”, and for promoting its products in defence-linked innovation competitions.

The companies include AI21 Labs, Aisera, Alan AI, Aleph Alpha, Anduril Industries, Anthropic, Cerebras Systems, Cohere, Comand AI, Databricks (including Mosaic), DeepSeek AI, deepset, Dynamo AI, H2O.ai, Helsing GmbH, Hive, insoundz, Mistral AI, OpenAI, Palantir, Pryon, Scale AI, SambaNova Systems, Shield AI, RAIC Labs (formerly Synthetaic), Thinking Machines Lab, Vectara, Vannevar Labs, Wisery Labs, xAI, Xiamen Yuanting Information Technology and Zhipu AI.

We focused mostly on privately held companies, given the additional difficulty investors face in carrying out human rights due diligence on the basis of available information, and their inability to exercise shareholder democracy as they can with publicly listed companies.

Company Selection Disclaimer: This company selection represents only a partial snapshot of the military and defence AI landscape, limited by the availability and transparency of public information. We invited all companies to respond to this report, encouraging them to mention any human rights safeguards they have in place. None provided a public response.


Q: Who is investing in these companies?

Venture capital and private equity investments play a decisive role. While these actors have the power to require technology companies to embed human rights and civilian protection-oriented safeguards into product design, they largely prioritise rapid growth and profit over accountability.

When BHRC surveyed ten of the largest VC funds and the two largest startup accelerators most actively investing in generative AI – including Andreessen Horowitz, General Catalyst and Founders Fund – we found that only one of the 12 firms states that it conducts due diligence on human rights-related issues when deciding whether to invest in companies, and only one supports its portfolio companies in ensuring the development of responsible technology.

Limited transparency, complex startup ownership structures and technological “black boxes” create environments where AI for defence can scale without oversight, calling into question investors’ approach to preventing civilian harm and unnecessary suffering.

Big Tech firms frequently use their venture arms to take minority stakes in AI companies, casting a wide net to maintain influence over emerging players. Prominent examples include Microsoft’s significant stake in OpenAI, Meta’s 49% stake in Scale AI, and Amazon’s and Google’s large minority stakes in Anthropic. This creates systemic risk for investors, as these companies wield disproportionate influence over the sector while avoiding direct accountability. At the same time, these firms have actively supported a light-touch or no-regulation approach to AI – as illustrated repeatedly by journalists covering AI policy in the US and by recent civil society reports on Big Tech’s lobbying tactics in Europe, India, Kenya, Brazil and Canada – further reinforcing concerns about concentrated power without corresponding oversight.


Investment Methodology Disclaimer: Because private capital markets lack transparency, exact investment amounts per round are difficult to determine. Instead, data from Preqin was used to assess an investor’s activity by counting the investment rounds they likely participated in. Please see the Preqin Terms and Conditions for more information about its data. While this approach doesn’t capture the full financial picture, it still provides useful signals about an investor’s likely activity.
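To make the round-counting proxy concrete, here is a minimal sketch of how such a tally could be computed from a flat table of investor-round records. The column names and sample data are purely illustrative assumptions, not Preqin's actual schema or figures.

```python
import pandas as pd

# Hypothetical extract of deal records: one row per investor per funding
# round. Column names and values are illustrative, not Preqin's schema.
deals = pd.DataFrame({
    "investor": ["Fund A", "Fund A", "Fund B", "Fund B", "Fund C"],
    "company":  ["Startup X", "Startup Y", "Startup Y", "Startup Y", "Startup Z"],
    "round_id": ["R1", "R2", "R2", "R3", "R4"],
})

# Proxy for investment activity: the number of distinct rounds each
# investor likely participated in, since dollar amounts are unavailable.
activity = (
    deals.groupby("investor")["round_id"]
    .nunique()
    .sort_values(ascending=False)
    .rename("rounds_participated")
)

print(activity)
```

Counting distinct rounds rather than summing undisclosed amounts trades financial precision for robustness: it requires only knowing that an investor appeared in a round, which is the information private-market databases most reliably record.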

Clarifications from investors mentioned in this report: 

All investors mentioned in this piece were contacted prior to publication using publicly available contact information.

The reference to SAP concerns the company’s investments in Aleph Alpha, Anthropic and Cohere, as announced by the company, and is not a reference to Sapphire Ventures.

Transamerica has clarified that it sold its corporate venture portfolio in 2021 and that Transamerica Ventures is no longer an active entity.



