“AI is better suited to ‘move slowly in order to move fast’ rather than ‘move fast and break things.’”
A New York-based partner at a multibillion-dollar generalist VC firm made this comment as part of a study on how VCs are engaging with responsible AI. Coming from an industry built on disruption, the remark captures a shift in thinking that the data shows is far more widespread than the prevailing narrative suggests.
Over the past 18 months, Reframe Venture, ImpactVC, and Project Liberty Institute have conducted extensive research, interviews, surveys, and engagement projects on responsible AI in venture capital. The work has involved more than 200 VC and growth equity funds and 80 LPs with over $6 trillion in assets under management, through events held in the US, Paris, Berlin, London, Tokyo, and Singapore.
Our latest research, an in-depth survey of 56 venture capital professionals from around the world, produced results that are both surprising and alarming: despite an overwhelming belief that responsible AI is financially material, VCs are struggling to incorporate it into their decision-making.
This finding was further reinforced at the March 2026 Responsible AI conference, which brought together 70 VCs and LPs in New York.
Responsible AI Alpha
Almost three-quarters of the VCs we surveyed believe that companies with stronger responsible AI practices will be more financially successful, rising to 83% for venture capitalists with five or more years of experience. Additionally, 84% see direct investment opportunities in companies that place responsibility at the core of their proposition.
The logic is simple: responsible AI avoids incidents and wins customers. High-profile bankruptcies and lawsuits over AI-related harms have made the financial significance of AI risk impossible to ignore, and partly as a result, enterprise adoption of AI has fallen short of enthusiasts’ predictions.
While the capabilities of AI systems have improved significantly, they still lack the security, accountability, and transparency that businesses need before integrating them into real-world workflows. Moreover, AI’s impact on human agency is a concrete operational challenge as well as a societal risk: 84% of the VCs surveyed expressed concern about AI’s potential to disempower human users.
Against this backdrop, procurement expectations at large corporations are becoming more demanding, and responsible AI is shifting from a nice-to-have to a requirement for making the sale. Reliable systems, transparent data processing, and human-augmented design are rapidly becoming moats for real-world deployment. In regulated industries such as healthcare, defense, and financial services, trust is a prerequisite. Across the board, companies building trustworthy AI are winning commercially.
Building an “AI responsibility stack”
In the current climate, Big Tech is creating standards that primarily serve its own competitive interests, while non-profits focus on frontier risks rather than implementation guidance, leaving a void where practical, commercially viable safety infrastructure should be. This white space is an investable gap.
Some startups are already capitalizing on this shift, with fast-growing subsectors emerging in AI safety, assurance, and agency infrastructure.
At the New York conference, the business case for such innovation was laid out on stage by Zoe Weinberg, a partner at ex/ante, a fund focused on turning AI risks into opportunities. The firm is betting on growing business and consumer demand for ownership and control, investing in tools that explicitly support human agency by improving privacy, security, and information integrity. Other funds, such as Juniper Ventures, are similarly moving to meet the demand to make AI safe and beneficial to humanity.
It’s not only niche funds that are spotting this gap in the market: 91% of the funds surveyed see a major opportunity in the “AI responsibility stack,” drawing clear parallels to the emergence of cybersecurity over the past decade.
LP pressure
Despite this belief, only 14% of the VCs surveyed rated their firms’ AI risk-assessment capabilities as “good.” Additionally, only 27% feel they have sufficient expertise in-house, a stark contrast with the 74% who feel confident about ESG in general.
Regulation hasn’t helped here. While the topic has grown in importance, uncertainty around requirements, scope, and timelines has become a persistent frustration, one that outweighs the actual costs of compliance. It is hard for firms to commit resources to a moving target.
With a push from LPs, VCs are beginning to develop the mindsets, expertise, and processes needed to close these knowledge gaps.
In the US, where federal AI regulation remains fragmented and ESG-specific language carries political baggage, LPs appear to be stepping into a governance vacuum, asking direct questions about AI risk management in ways that bypass the ESG framework altogether. European funds have operated under regulation for longer, so LP pressure on this topic is already high, though it reportedly has not increased significantly in recent years.
Still, around half of the VC respondents expect responsible AI to come up in their next fundraise, regardless of region.
This upstream pressure is starting to shift incentives and budgets, and to drive interest in learning more about the subject. Reframe Venture’s resources, including its Responsible AI Due Diligence Tool and Responsible AI Training, are rapidly gaining interest from VCs around the world.
There is a strong and widely held belief among VCs around the world that responsible AI is good business. The data supports it, the market demands it, and the investment opportunity is real. But belief without action is just talk. VCs need to be proactive in realizing commercial opportunities, and LPs may reward them for doing so.
________________________________________________________________________________________________
Oliver Nixon is the Director of Research at Reframe Venture.
