The National Institute of Standards and Technology wants public feedback on its plan to develop guidance on how companies can securely implement different types of artificial intelligence systems.
NIST on Thursday released a concept paper on creating control overlays to protect AI systems, based on the agency's widely used SP 800-53 framework. The overlays are designed to help businesses implement AI in ways that maintain the confidentiality and integrity of the technologies across a variety of use cases.
The agency has also created a Slack channel to gather community feedback on overlay development.
“Advances in and potential use cases for adopting artificial intelligence (AI) technology pose both new opportunities and new cybersecurity risks,” the NIST paper said. “The latest AI systems are primarily software, but they introduce security challenges that differ from those of traditional software.”
The project currently focuses on five use cases:
- Adapting and using generative AI – assistants/large language models
- Using and fine-tuning predictive AI
- Using AI agent systems – single agent
- Using AI agent systems – multi-agent
- Security controls for AI developers
The rapid acceleration of AI use in corporate environments has created opportunities for businesses to improve workplace productivity, but it has also raised serious concerns about whether the technology can be implemented safely.
Researchers have identified several ways that malicious actors can use AI agents to steal or corrupt data. At the recent Black Hat conference in Las Vegas, Zenity Labs researchers demonstrated how hackers can hijack leading AI agents and weaponize them for attacks, such as manipulating critical workflows.
AI can also serve as an attack tool. In July, Carnegie Mellon researchers revealed that large language models (LLMs) can autonomously launch cyberattacks.
