- 43% of organizations still don’t have an AI policy plan, report finds
- Currently, workers are adopting AI faster than companies are creating policies.
- Nexos.ai urges small businesses to put basic policies in place – they can evolve from there
Despite 70% of legal professionals already using general-purpose AI in their work, 43% of organizations say they have not yet developed a formal AI policy (and have no plans to develop one).
New research from Nexos.ai reveals that the biggest risks associated with AI tools may actually stem from a lack of visibility and governance.
And small businesses are generally most at risk, since they have fewer resources in terms of both staff and procedures.
AI will be largely unmanaged
Nexos.ai found that employees regularly paste contracts, NDAs, and legal documents into public chatbots to save time, putting sensitive information at risk. While the enterprise-grade version of the product promises maximum data security and does not train on customer data, the public version offers weaker protections.
Data security (46%) ranks as the top concern for legal teams, ahead of ethical issues (42%) and legal privilege (39%), yet the way employees interact with public chatbots doesn't match that concern.
Nexos.ai also pointed out that legal AI workflows may already be in use in small and medium-sized businesses without being formally established or recognized. Because AI adoption happens incrementally and without governance, companies are left playing catch-up, trying to manage the correct and safe use of AI only after employees have already started using the tools.
“The risk for small businesses is not the reckless use of AI, but the invisible changes in workflow,” writes product director Žilvinas Girėnas.
But it doesn’t have to be difficult: the report explains that a basic AI policy needn’t be complex. Defining approved tools, prohibited use cases, and restrictions on sensitive data may be enough – and at the very least, it would improve on the current governance gap.
Looking ahead, Nexos.ai suggests companies start with simple AI policies to ensure sensitive data doesn’t end up in unauthorized tools. The report urges companies to vet and approve tools before their teams adopt them, and even after deployment, Nexos.ai recommends human oversight of AI-generated content before it is used for legal purposes.
“Efficiency will be achieved faster than governance if these tools are adopted before a company defines approved uses, data boundaries, and review procedures,” Girėnas concluded.
