For years, Silicon Valley has competed for talent with ever-richer compensation packages built around salary, bonuses, and equity. Now a fourth item is creeping in: AI inference.
As generative AI tools become more integrated into software development, the cost of running the underlying models (known as inference) has emerged as a productivity driver and a budget line that finance leaders can’t ignore.
Software engineers and AI researchers at tech companies are already vying for access to GPUs, with this computing power carefully rationed to the highest-priority projects. Some tech job seekers are now starting to ask how much AI compute budget they will get if they join a company.
Thibault Sautiaux, an engineering lead on Codex, OpenAI's AI coding service, recently wrote that usage per user is growing much faster than overall user growth, a sign that AI compute is becoming scarcer and more valuable.
This scarcity is changing how engineering jobs and pay are valued. OpenAI president Greg Brockman put it bluntly: "The availability of inference compute will make software as a whole increasingly more productive."
In other words, access to AI compute may soon matter as much as a big salary or a big stock grant. A programmer in the AI era without access to large-scale compute may end up producing far less software than their colleagues, potentially threatening their career prospects.
Hakeem Shibly, a data specialist at Levels.fyi, recently spotted a compensation submission from a software engineer that listed a "Copilot subscription" as part of their pay and benefits, a small but symbolic step toward AI access becoming a standard perk.
Getting paid in AI tokens
Some in the AI community see an even clearer future.
"OpenAI and Anthropic should create a job site where companies can advertise roles by listing both the token budget and the salary range for the job," said Peter Gostif, AI feature lead at Arena, a startup that measures model performance. Neither AI lab responded to requests for comment Monday.
Investors are paying attention. Theory Ventures’ Tomasz Tunguz said companies are effectively adding AI inference as a fourth component of engineering compensation: salary, bonuses, stock, and now tokens.
Tokens are the economic unit of generative AI. Models break words and other input down into numeric tokens for easier processing; one token is roughly three-quarters of a word. Tokens are also how AI model usage is priced, via an industry-standard cost per million tokens.
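That pricing arithmetic is simple enough to sketch. The words-per-token ratio below is the rough rule of thumb from the article, and the per-million-token rate is a hypothetical figure for illustration, not any vendor's published price:

```python
# Back-of-the-envelope token cost estimate.
# Assumptions: ~0.75 words per token (rule of thumb), and a
# hypothetical price of $5 per million tokens.

WORDS_PER_TOKEN = 0.75
PRICE_PER_MILLION_TOKENS = 5.0  # USD, illustrative only


def estimate_inference_cost(word_count: int) -> float:
    """Estimate inference cost in USD for a body of text."""
    tokens = word_count / WORDS_PER_TOKEN
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS


# 750,000 words is roughly 1,000,000 tokens, i.e. one unit price.
print(estimate_inference_cost(750_000))  # -> 5.0
```

Real billing is more involved (input and output tokens are usually priced differently), but the per-million-token unit is the common denominator.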
"Paid in tokens? We'll probably start seeing that in 2026," Tunguz said.
CFOs are paying attention
For CFOs, this potentially large new expense needs to be tracked as closely as other headcount-related costs, Tunguz said.
"That's starting to happen," Tunguz told me, as employees' AI usage contributes more and more to total cash burn. "This is a consideration for the CFO's office."
Levels.fyi puts the 75th-percentile salary for software engineers at $375,000, so Tunguz estimates that adding $100,000 in annual inference costs would bring the fully loaded cost to $475,000. That means more than 20% of compensation costs could eventually come from AI usage.
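Tunguz's back-of-the-envelope math checks out; the figures below come straight from the article:

```python
# Share of fully loaded engineering cost attributable to AI inference,
# using the article's figures.

salary = 375_000            # 75th-percentile engineer salary (Levels.fyi)
inference_budget = 100_000  # hypothetical annual inference spend per engineer

fully_loaded = salary + inference_budget
ai_share_pct = round(inference_budget / fully_loaded * 100, 1)

print(fully_loaded)   # 475000
print(ai_share_pct)   # 21.1 -- i.e. "more than 20%" of total cost
```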
A key question for finance leaders: what is the return on that AI spending? If cloud infrastructure performance is measured in gross profit per GPU hour, Tunguz suggests the employee equivalent is productive work per dollar of inference.
Tunguz incorporates AI tools and models into his daily workflow, automating 31 tasks a day at an annual cost of approximately $12,000 in inference.
"Engineers spending $100,000? They should be 8x more productive!" he wrote in a recent LinkedIn post.
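The "8x" figure follows from dividing the hypothetical $100,000 engineer budget by Tunguz's own roughly $12,000 annual spend:

```python
# Where the "8x more productive" bar comes from, using the article's figures.

tunguz_annual_inference = 12_000  # Tunguz's own approximate annual spend
engineer_budget = 100_000         # the hypothetical per-engineer budget

multiple = round(engineer_budget / tunguz_annual_inference, 1)
print(multiple)  # 8.3 -- roughly the "8x" productivity bar
```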
If this trend continues, 2026 could be the year that engineers negotiate salaries not just in dollars or stocks, but also in tokens.
