Washington – The Business Roundtable today issued the following statement regarding comments submitted by the organization in response to the Office of Science and Technology Policy (OSTP) Request for Information (RFI) on the National Priority for Artificial Intelligence (AI).
“Continued advancements and adoption of AI are poised to benefit America’s businesses, consumers, and workers. We have a shared responsibility to maximize the social and economic benefits of AI,” said Business Roundtable CEO Joshua Bolten. “Business Roundtable member companies are among the world’s largest developers and users of AI, and they are working hard to develop and deploy these technologies responsibly to build trust and acceptance of AI.”
The Roundtable’s comments reference its Responsible AI Roadmap for Organizations (RAI), which outlines principles to guide the responsible development and use of AI in the enterprise. The roadmap was released alongside a set of policy recommendations as part of the launch of the Business Roundtable Responsible AI Initiative in early 2022.
The submission reads, in part:
“…the Business Roundtable Roadmap includes principles to ensure that organizations developing and deploying AI protect the rights and safety of the American public at every stage of the AI lifecycle, and that corporate oversight and governance of AI support responsible adoption. Among its principles:
- While recognizing the opportunity for AI to mitigate human bias, implement safeguards against unfair bias where AI systems can have significant consequences for individuals.
- Where possible and appropriate, particularly for systems with potentially significant consequences for individuals, explain the relationships between the inputs and outputs of AI systems and the extent to which those systems are subject to human oversight.
- Provide AI system implementers with sufficient information and training to support responsible and reliable downstream use.
- Disclose to the end user when the end user is interacting directly with an AI agent (such as a chatbot) that simulates human interaction.
- Continuously assess and monitor model fitness and impact to maintain fit-for-purpose performance, accuracy, and resilience.”
The comments also outlined principles for government oversight contained in the Roundtable’s 2022 policy recommendations.
“To the extent measures are implemented through government rules, regulations, and standards, it is important to remember that AI applications span a wide range of contexts and corresponding risk levels. A risk-based approach should be taken to avoid overregulating uses of AI that do not have a significant impact on individuals or society. Where government action is warranted:
- Any regulatory approach to AI considered or adopted in the United States should be contextual, risk-based, proportionate, and use-case specific. Any framework, guidance, or regulation should be tailored to specific AI use cases rather than broadly regulating the technology itself, and should be calibrated to the risk of material harm.
- AI measures should encourage good-faith, demonstrable efforts to comply with requirements, norms, and standards.
- When developing such measures, policymakers should thoroughly assess existing regulatory gaps before enacting new regulations, avoid overlapping and conflicting rules, and identify where guidance is most needed.
- AI measures should include clear definitions, accompanied by illustrative use-case examples informed by ongoing dialogue with industry stakeholders.
- Policymakers should consider evidence-based regulatory approaches and tools (such as regulatory sandboxes) that allow governance practices to be refined and give industry opportunities to discover and share best practices.
- Finally, governments could encourage industry to undertake self-assessments, whether conducted internally or against external guidelines and standards.”
Additionally, the Roundtable provided examples of how companies are aligning their use of AI with the roadmap’s principles. The comments also underscored the need for a national data privacy law, cited the NIST AI Risk Management Framework as a strong example of public-private partnership, and emphasized the importance of public and private investment in a future-ready workforce.
Click here for the full comments.
