CISOS: Do not block AI.

Applications of AI


The introduction of Generating AI (Genai) tools such as ChatGpt, Claude, and Copilot has created new opportunities for efficiency and innovation, but also creates new risks. For organizations that already manage sensitive data, compliance obligations, and complex threat situations, there is no rush to adoption without thoughtful risk assessment and policy adjustments.

Like other new technologies, the first step is to understand the intended and unintentional use of Genai and assess both its advantages and disadvantages. This means resisting the urge to adopt AI tools simply because they are popular. Risks should facilitate implementation – not the other way around.

Organizations often assume that Genai needs a whole new policy. In most cases, this is not necessary. A better approach is to extend existing frameworks such as acceptable usage policies, data classification schemes, and ISMS documents located in ISO 27001 to address genai-specific scenarios. Adding layers of disconnected policies can confuse staff and lead to policy fatigue. Instead, they integrate Genai risks into tools and procedures that employees already understand.

The main blind spot is input security. Many people focus on whether the output generated by AI is virtually accurate or biased, but they overlook more pressing risks. This is what the staff has entered into public LLM. The prompts include sensitive details such as internal project names, client data, financial metrics, and even credentials. If an employee does not send this information to an external contractor, they should not supply it to publicly available AI systems.

It is also important to distinguish between different types of AI. Not all risks are created equally. The risk of using facial recognition in surveillance differs from having developer teams access to open source Genai models. Putting these together under a single AI policy can over-reply risky situations, resulting in unnecessary control and, worse, blind spots.

There are five core risks that cybersecurity teams need to address.

Careless data leaks: Through the use of public genai tools or misunderstood internal systems.

Data addiction: Malicious input that affects AI models or internal decisions.

Overlasting on AI output: Especially when staff cannot check the accuracy.

Rapid injection and social engineering: Use AI systems to remove data and interact with users.

Policy Vacuum: If AI usage is being done informally without monitoring or escalation paths.

Addressing these risks is not just a matter of technology. It needs to focus on people. Education is essential. Staff need to understand what genai is, how it works and where it is wrong. Role-specific training for developers, HR teams and marketing staff can significantly reduce misuse and build a culture of critical thinking.

The policy should also clearly outline acceptable use. For example, it's okay to use coding help using CHATGPT, but can't you create a client communication? Can I use AI to summarise the board portions, or is it off limits? Clear boundaries combined with feedback loops that allow users to flag or clarify issues are key to continuous security.

Finally, the use of genai must be based on a cyber strategy. It's easy to get caught up in AI hype, but leaders need to start with the problem they're solving, not the tools. If AI makes sense as part of that solution, it can be safely and responsibly integrated into existing frameworks.

The goal is not to block AI. It's about opening your eyes and adopting through structured risk assessment, policy integration, user education and continuous improvement.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *