The National Telecommunications and Information Administration (NTIA), an agency of the U.S. Department of Commerce, called for public comment on policies to promote accountability in trustworthy artificial intelligence (AI) systems.
The aim was to solicit stakeholder feedback to inform a forthcoming report on AI assurance and accountability frameworks, whose proposals could guide future federal and non-governmental regulation.
Promoting trustworthy AI that upholds human rights and democratic values is a key focus of the federal government, according to the NTIA’s request. However, gaps remain in ensuring that AI systems are accountable and adhere to trustworthy AI principles around fairness, safety, privacy, and transparency.
Accountability mechanisms such as audits, impact assessments, and certifications can ensure that AI systems comply with credible standards. However, the NTIA observed that implementing effective accountability remains challenging and complex.
The NTIA raised a range of considerations, including how to balance trustworthy AI goals, barriers to fulfilling accountability responsibilities, the complexity of AI supply and value chains, and the difficulty of standardizing measurements.
Over 1,450 comments on AI accountability
Comments were accepted until June 12 to help inform the NTIA’s future report and guide potential policy developments around AI accountability.
More than 1,450 comments were submitted.
The comments, which can be searched by keyword, include links to articles, letters, documents, and lawsuits concerning the potential impact of AI.
Tech companies react to NTIA
Comments include feedback from the following tech companies working to develop AI products for the workplace:
Letter from OpenAI to NTIA
In its letter, OpenAI welcomed the NTIA’s framing of the issue as an “ecosystem” of AI accountability measures necessary to ensure trustworthy artificial intelligence.
The company wrote that a mature AI accountability ecosystem would consist of general accountability elements that apply broadly across domains and vertical elements customized for specific contexts and applications.
OpenAI has focused on developing foundation models, which are broadly applicable AI models that learn from wide-ranging datasets.
The company said it believes a safety-focused approach should be taken to these models, regardless of the specific domains in which they may be used.
OpenAI detailed some of its current approaches to AI accountability: it publishes a “system card” to provide transparency on a new model’s key performance issues and risks, conducts qualitative “red team” testing to explore functionality and failure modes, and quantitatively evaluates various capabilities and risks. The company also maintains clear usage policies and enforcement mechanisms that prohibit harmful use.
OpenAI also identified several important open questions, such as how to evaluate potentially dangerous capabilities as models continue to evolve.
The letter discussed open questions around independent third-party evaluation of models, and suggested that future foundation models posing significant risks may warrant registration and licensing requirements.
While OpenAI’s current practices focus on transparency, testing, and policy, the company appeared willing to work with policymakers on more robust accountability measures, suggesting that highly capable AI models may require a customized regulatory framework.
Overall, OpenAI’s response reflects the company’s belief that a combination of self-regulatory efforts and government policy will play an important role in developing an effective AI accountability ecosystem.
Letter from Microsoft to NTIA
In response, Microsoft argued that accountability should be a fundamental element of the framework for addressing the risks posed by AI while maximizing its benefits. Companies that develop and use AI must take responsibility for the impact of their systems, and regulators need the power, knowledge and tools to exercise proper oversight.
Microsoft outlined lessons learned from its responsible AI program, which aims to keep machines under human control. Accountability is built into its governance structures and responsible AI standards, which include:
- Impact assessments to identify and address potential harms.
- Additional oversight for high-risk systems.
- Documentation confirming that a system is fit for purpose.
- Data governance and management practices.
- Measures to ensure human direction and control.
Microsoft also explained how it conducts red teaming to uncover potential harms and failures, and how it publishes transparency notes for its AI services. Microsoft’s new Bing search engine applies this responsible AI approach.
Microsoft has made six recommendations to drive accountability:
- Build on NIST’s AI Risk Management Framework to accelerate the use of accountability mechanisms such as impact assessments and red teaming, especially for high-risk AI systems.
- Develop a legal and regulatory framework based on the AI technology stack, including licensing requirements for foundation models and infrastructure providers.
- Promote transparency as a means to achieve accountability, such as through a registry of high-risk AI systems.
- Invest in building the capacity of legislators and regulators to keep pace with AI developments.
- Invest in research to improve AI evaluation benchmarks, explainability, human-computer interaction, and safety.
- Develop and coordinate international standards that underpin the assurance ecosystem, such as ISO AI standards and content provenance standards.
Overall, Microsoft seemed ready to partner with stakeholders to develop and implement effective approaches to AI accountability.
Letter from Google to NTIA
Google’s response welcomed the NTIA’s request for comment on AI accountability policy. The company recognized that enabling trustworthy AI requires both self-regulation and governance.
Google highlighted its commitment to AI safety and ethics, including a set of AI principles focused on fairness, safety, privacy and transparency. Google has also implemented responsible AI practices internally, including conducting risk assessments and fairness assessments.
Google advocated using existing regulatory frameworks where applicable and risk-based interventions for high-risk AI, and encouraged a collaborative, consensus-based approach to developing technical standards.
Google agreed that accountability mechanisms such as audits, assessments, and certifications can help ensure trustworthy AI systems. However, the company noted that these mechanisms face implementation challenges, such as assessing the many factors that affect an AI system’s risk.
Google recommended focusing accountability mechanisms on key risk factors, targeting the ways AI systems are most likely to have a significant impact on society.
Google advocated a “hub-and-spoke” model of AI regulation, with sectoral regulators overseeing AI deployment under the guidance of a central body such as NIST. The company supported clarifying how existing laws apply to AI and encouraged proportionate, risk-based accountability measures for high-risk AI.
Like the other companies, Google believes that greater accountability in AI requires a combination of self-regulation, technical standards, and limited, risk-based government policy.
Letter from Anthropic to NTIA
Anthropic’s response described its belief that a robust AI accountability ecosystem requires mechanisms tailored to AI models. It identified several challenges, including the difficulty of rigorously evaluating AI systems and of accessing the sensitive information needed for audits without compromising security.
Anthropic supported the following measures:
- Model evaluations: Current evaluations are an incomplete patchwork that requires expertise. Anthropic recommended standardizing capability evaluations focused on risks such as deception and autonomy.
- Interpretability research: Grants and funding for interpretability research could enable more transparent and understandable models, though regulations mandating interpretability are currently infeasible.
- Pre-registration of large-scale AI training runs: AI developers should report large training runs to regulators, under appropriate confidentiality protections, to give them visibility into emerging risks.
- External red teaming: Mandatory adversarial testing of AI systems before release, whether through a centralized body such as NIST or via researcher access. However, red-teaming expertise currently resides largely within private AI labs.
- Auditors with technical expertise, security awareness, and flexibility: Auditors need deep machine learning experience and must guard against leaks and hacks, while working within constraints that preserve competitiveness.
Anthropic recommended scoping accountability measures to a model’s capabilities and demonstrated risks, as assessed through targeted capability evaluations. The company also proposed clarifying the intellectual property framework for AI to enable fair licensing, and providing guidance on antitrust issues so companies can cooperate on safety.
Overall, Anthropic emphasized the difficulty of rigorously evaluating advanced AI systems and accessing information about them, given their sensitive nature. It argued that funding for capability evaluations, interpretability research, and access to computational resources are essential to an effective AI accountability ecosystem that benefits society.
What to expect next
The responses to the NTIA’s request for comment show that AI companies recognize the importance of accountability, but unresolved questions and challenges remain around effectively implementing and scaling accountability mechanisms.
The responses also suggest that both corporate self-regulatory efforts and government policies will play a role in developing a robust AI accountability ecosystem.
Going forward, the NTIA’s report is expected to make recommendations for advancing an AI accountability ecosystem by leveraging and building on existing self-regulatory efforts, technical standards, and government policies. Stakeholder input from the comment process could help shape those recommendations.
However, turning those recommendations into concrete policy changes and industry practices that transform how AI is developed, deployed, and overseen will require coordination among government agencies, technology companies, researchers, and other stakeholders.
The road to mature AI accountability is likely to be long and difficult, but these early steps indicate momentum toward achieving that goal.
Featured Image: EQRoy/Shutterstock
