Despite the rapid adoption of AI, new research shows that only 37% of IT decision makers make AI security a priority when implementing the technology.
Credit: this is an edited version of an article originally published on SME Today.
In a survey of more than 2,000 IT decision makers, 94% said AI is now core to their organization’s strategy. However, as the use of AI expands, organizations are exposed to new risks from cybercriminals. Only about one-third of respondents cited cybersecurity as one of their top three concerns when implementing AI.
When asked about the role of security in their AI strategy, 37% of respondents described it as a compliance requirement, an unnecessary expense, or simply not required. This suggests that many organizations treat cybersecurity investment as a hurdle rather than a priority, leaving them exposed to breaches and compliance failures.
More and more healthcare organizations are using AI for tasks such as practice management, learning analytics, and communications. But with limited budgets, they can ill afford missteps when deploying these tools. Even small security lapses can put sensitive staff and patient data at risk, disrupt daily operations, or cause wider network issues.
Business and IT leaders play a critical role in shaping how AI security is perceived, and they must work together to ensure that security is seen as a key enabler of safe, responsible AI deployment rather than a roadblock.
When patients access AI tools through patient portals or websites, they can inadvertently create security risks. Misuse of AI, accidental sharing of sensitive information, or engagement with insecure platforms can expose both healthcare settings and patients to cyberthreats. IT and clinic leaders must therefore provide guidance and safeguards that help patients use AI with great care wherever it touches clinic or care management.
Encouragingly, 42% of IT decision makers report taking a proactive approach to AI security, incorporating security into both development and strategic planning. This should include staff training so that employees understand how to use AI safely and can act as the first line of defense against cyberthreats. As cyber-attacks increase and AI becomes central to operations, the potential risks grow with it; without proper safeguards, organizations remain exposed.
AI is a technology that may already have some staff and stakeholders wary, and it brings new fears and concerns. At the same time, deploying a tool without a thorough prior assessment poses risks that healthcare organizations and trusts need to plan for carefully. IT leaders must work closely with practices to understand the nature of these threats and how they can affect both systems and data. Business leaders, in turn, need to ask the right questions, challenge assumptions, and factor security into every decision.

