As ESG risks increase, access to AI will not necessarily be unlimited – are businesses ready? | Opinion | EcoBusiness



Companies are making huge bets on artificial intelligence (AI) on the premise that their investments will pay off. Corporate leaders are treating AI like clean air, assuming that the quantity and quality their business needs will always be available.

Companies and investors should focus on more than just the huge investment opportunities that AI implementation presents.

The obvious risks in doing so must be considered.

Chief sustainability officers and government affairs officers have much to offer in this regard. Our experience in addressing climate risks, managing supply chain challenges, and understanding how the downsides of globalisation affect business offers a model for future-proofing your AI deployments.

Leaders are now building strategies, reshaping operations, and making employee decisions with the implicit assumption that AI will remain just as accessible and affordable for years to come.

Businesses plan for risk scenarios to ensure business continuity against a variety of eventualities, from natural disasters to cyberattacks. This exercise allows you to test and improve your strategy and plan accordingly.

But few, if any, apply the same rigor to AI.

Ignoring the “responsible AI trilemma” (environmental damage, job losses, and rising inequality) will create significant constraints on access to AI.

These may include increased costs of electricity and water due to environmental impacts, increased public pressure due to unemployment and rising income inequality, and explicit government regulations that restrict access.

Boards and investors need to ensure that executives plan for a world without the AI access they currently have.

By engaging in the exercise of considering possible future scenarios, companies can build resilience by testing their strategies against different situations.

The baseline scenario is not the only scenario

The resilience that companies build against other major risk factors must also be built into their AI strategies.

For investors, this exercise can open up investment themes they hadn’t previously considered.

Many boards already engage in this kind of analysis through climate scenario planning, where companies prepare for different outcomes. The exercise does not ask business leaders to predict the future. Rather, it provides a model for stress-testing their strategies against possible futures they may not have considered, enabling better business planning and greater enterprise resilience.

Climate risks that can financially impact your business include physical risks such as rising temperatures and intensifying storms, and transition risks such as the introduction of a carbon tax or changes in consumer and employee behavior.

The risks most important to a given business are then stress-tested against scenarios such as: current trajectory (1.5°C to 2°C), middle of the road (2°C to 3°C), and high physical and transition risk (above 3°C).

Similarly, there are three AI scenarios that companies should consider to future-proof their business: AI remains unlimited and affordable; AI is available but expensive; or AI becomes rationed or sovereign.

The baseline scenario assumes today’s operating environment, where AI remains unlimited and affordable. That may prove to be true. However, the growing political and social forces opposed to this suggest otherwise. The companies most in need of stress testing are those that operate under such assumptions.

In the second scenario, AI is still available but expensive. Computing costs skyrocket, and energy and water constraints bite. Here, AI becomes more of a luxury: large companies maintain access, while mid-market and small businesses are priced out. Planning for this scenario requires rethinking your competitive positioning should costs increase significantly.

In the worst-case scenario, AI is rationed or becomes sovereign. Governments intervene to control availability for economic and/or national security reasons, and data localisation policies segment the market. To avoid disruption in this scenario, companies need to think critically about their AI supply chains, know where they source their AI, and ensure continued access.

Companies know how to conduct supply chain due diligence. Going forward, they need to extend that rigour to AI under every scenario.

Productivity at the expense of continuity

Businesses that optimise AI solely for efficiency and productivity, ignoring the potential costs of doing so, put their business continuity at risk.

If AI remains available but becomes expensive beyond a certain price point, companies that rely on it in place of their workforce will face the double whammy of higher bills and an inability to replace the institutional knowledge they gave up.

If AI becomes rationed or sovereign, business models built around always-on, always-affordable AI may become completely unviable.

Companies that are most actively adopting AI are most at risk, and storm clouds are already forming. Data centres around the world are being shut down over energy consumption and water shortages. Local opposition is blocking projects. Labour unions are opposing AI deployments that would cause job losses, including AI-equipped driverless cars.

All of the above combine to threaten the operations and long-term health of your business.

Protection of operating license

Business leaders need to address the responsible AI trilemma within their own operations, and investors need to ensure they do the same.

This reduces risk for your business, lowers costs, and can deliver immediate returns. It can also protect you from the growing backlash.

At the same time, we need to prepare for a different future.

You don’t need a crystal ball for scenario planning. You simply need to identify which parts of your business are most vulnerable under plausible futures.

Boards need to ask: What happens to our strategy if our computing costs double? What operations will we be unable to execute if we no longer have access to a country’s model? How will our employees and customers view us if we deploy AI without any consideration for the impact on society?
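The first of those board questions can be made concrete with a simple stress test. The sketch below is purely illustrative: the spend figures and scenario cost multipliers are hypothetical assumptions, not numbers from this article, and a real exercise would use a company's own financials.

```python
# Illustrative stress test of AI cost exposure under the three access
# scenarios described above. All figures and multipliers are hypothetical.

BASELINE_AI_SPEND = 1_000_000  # hypothetical annual AI/computing spend
ANNUAL_PROFIT = 4_000_000      # hypothetical annual profit

# Scenario -> assumed multiplier on today's AI costs (hypothetical values)
SCENARIOS = {
    "unlimited_and_affordable": 1.0,  # baseline: costs stay flat
    "available_but_expensive": 2.0,   # computing costs double
    "rationed_or_sovereign": 3.5,     # scarcity pricing plus compliance costs
}


def stress_test(baseline_spend: float, profit: float, scenarios: dict) -> dict:
    """Return profit remaining in each scenario after the extra AI cost."""
    results = {}
    for name, multiplier in scenarios.items():
        extra_cost = baseline_spend * (multiplier - 1.0)
        results[name] = profit - extra_cost
    return results


for scenario, remaining in stress_test(
    BASELINE_AI_SPEND, ANNUAL_PROFIT, SCENARIOS
).items():
    print(f"{scenario}: remaining profit = {remaining:,.0f}")
```

Even a toy model like this forces the baseline assumption into the open: a strategy that only survives the first row of the table is a strategy that has not been stress-tested.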

Companies that treat access to AI as a strategic risk variable, such as carbon, water, or supply chain security, will be better positioned than those that treat it as a given.

If companies understand and mitigate the business and public interest risks associated with AI adoption, they will reap the rewards of better business while doing their part to mitigate the unchecked opposition to AI.

Otherwise, the worst case scenario can quickly become the base case.

Steven Okun is the CEO of APAC Advisors, a Singapore-based consulting firm specializing in geopolitics and responsible investing. Megan Willis is a senior advisor and Noemie Viterale is a manager at APAC Advisors.
