Taking a stand: How to ensure the ethical use of AI

The eve of the U.S. and Israeli attack on Iran was the deadline for artificial intelligence developer Anthropic to capitulate to the U.S. Department of War’s terms for unfettered access to its AI tools or risk losing a $200 million contract. Anthropic stood firm, issuing a statement that, without guardrails against domestic mass surveillance and use in autonomous weapons, the company would not comply with the DoW’s demands. In response, the DoW designated Anthropic a “supply chain risk”.

This stance is consistent with the red line drawn by Anthropic CEO Dario Amodei. In a blog post, he outlined the risks posed by the proliferation of powerful AI and its acquisition by state actors. Amodei’s policy responses range from practical and necessary insights to idealistic extensions of his own worldview, but he is very clear about the potential consequences of government abuse of AI.

Anthropic’s DoW stance highlights the reality of AI’s growing importance in government toolkits and the risks that come with it. Without appropriate guardrails against misalignment, abuse and severe economic disruption, the uncritical and unrestricted use of the technology will have serious consequences. Avoiding these will require drastic and comprehensive political action, but there is currently little agreement among policymakers and AI developers about what form this should take.

Preventing AI abuse

Amodei warns that powerful AI could easily be exploited: it could give terrorists the means to commit even worse atrocities by supplying information about biological weapons and cyberattacks, or give governments the power to carry out mass surveillance. AI’s transcription and classification capabilities make it possible to compile a single record of an individual’s every conversation and action, enabling flagrant human rights violations.

In this case, a civil liberties approach to AI regulation prioritizes individual privacy and autonomy, offering a framework in which legislative and policy measures can prevent this type of data collection.

Preventing government abuse of AI has two dimensions: international and domestic.

To curb the misuse of AI by foreign powers, Amodei recommends a “crimes against humanity” framework for authoritarian uses of AI technology. This would be complemented by a global “taboo” against such uses and a hawkish approach to trade, potentially limiting the sale of cutting-edge AI inputs to authoritarian regimes. This will require some international cooperation, but given the small number of countries producing such equipment, it may not be an unreasonable goal.

Domestically, the situation is perhaps even more difficult. When the customer is the government, laws alone are not enough. While Anthropic may have taken a principled position on this issue, the swift acceptance of DoW contracts by its competitors illustrates the difficulty of setting ethical standards unilaterally.

Misaligned goals

The threat Amodei identifies is not limited to the misuse of AI by states. He also highlights the risk of “misalignment” between the goals of AI and those of its users. While Amodei rejects the idea that doomsday scenarios involving rogue AI models are inevitable, he takes the concerns seriously and even acknowledges that laws are needed to steer AI development in a safe direction.

Amodei called on AI labs to make alignment a core part of their training approach, stressed the need for research that identifies misalignment risks early and encouraged governments to take a proactive approach. This means monitoring live models used internally and externally, and requiring by law the disclosure of misaligned behavior.

Amodei is absolutely right to call on lawmakers to develop proper oversight capabilities, as it is not in developers’ interests to allow AI to misbehave.

But for now, neither the other major AI companies nor the U.S. government appears to be on board with Amodei’s approach.

Reducing economic impact

One of the most important areas of AI policy is how governments should respond to the economic disruption AI may cause. Amodei offers eight recommendations, among which voluntary wealth redistribution through a “culture of philanthropy” appears twice. These fall into three categories: corporate responsibility, personal responsibility and political responsibility.

While monitoring AI’s impact on employment is certainly an important step in assessing its economic effects, Amodei’s recommendations run into problems when it comes to mitigating them. He encourages companies to avoid lay-offs and to continue paying employees even after AI has replaced their output. Without legal incentives to do so, this recommendation conflicts with the reality of corporate incentives.

Similarly, Amodei warns of the potential for rapid wealth accumulation from large language models and AI’s impact on the job market. He cites Anthropic’s unusual policy of donating 80% of its wealth and company stock to charity. But technology-driven revolutions are not new, and each one has increased the share of the world’s wealth held by the wealthy. Wealth accumulation through AI will not be remedied by the voluntary actions of a few rich individuals. Since 1975, U.S. inequality as measured by the Gini coefficient has steadily increased, rising from 35.6 in 1975 to around 41.8 in 2023. For voluntary philanthropy to work this time, it would have to break fundamentally with the economic pattern of the past 50 years of technological innovation.
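To make the Gini figures above concrete: the coefficient can be computed as the mean absolute difference between all pairs of incomes, normalized by twice the mean income, with 0 meaning perfect equality and 1 maximal inequality. A minimal sketch, using made-up incomes purely for illustration (not the underlying U.S. data):

```python
def gini(incomes):
    """Gini coefficient: mean absolute difference across all pairs of
    incomes, divided by twice the mean. 0 = perfect equality, 1 = maximal."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_diff / (2 * n * n * mean)

# Illustrative incomes only (hypothetical, not real survey data):
print(round(gini([40, 40, 40, 40]) * 100, 1))  # everyone equal -> 0.0
print(round(gini([10, 20, 35, 80]) * 100, 1))  # skewed distribution -> 38.8
```

On this scale, the rise from 35.6 to roughly 41.8 reflects the income distribution becoming steadily more skewed toward the top.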

Amodei also recommends macroeconomic shifts. Although he does not advocate a specific redesign of the tax system, his ideas of more progressive taxation and a tax specific to AI companies point to the need for comprehensive macroeconomic policies that take job losses into account. Faced with the threat of severe economic disruption – mass unemployment affecting entry-level workers and the white-collar middle class – a national macroeconomic approach is needed.

Domestic law alone will not be enough. Amodei recognizes the need for state intervention to counter the concentration of economic power in the hands of AI producers, but this also requires a global effort. Just as wealth concentrating in a few hands within a country raises domestic concerns, wealth concentrating in a few countries through AI raises global concerns about competition and innovation.

This requires policymakers and the public to take AI developments seriously and to engage critically with what this technology can and should do. We don’t need doomerism, but we do need comprehensive political action.

Jordan Nunn is Account and Content Officer at OMFIF.
