Security engineer warns small businesses not to rush into AI



Kyle Kellams: Small business owners are navigating the fine line of integrating artificial intelligence into their operations while trying to do so safely.

Chris Wright, a partner and lead security engineer at Little Rock-based Sullivan Light Technologies, says the pressure to compete in an evolving landscape makes adding AI tempting.

Wright: There's a ton of hype around AI. I think everyone is scared of missing out, the FOMO aspect of it, and people jump into it. When I see that much hype, I always look at who is creating it. It's usually the CEO or owner of an AI startup company. They have something invested in it, so they hype it with every breath, hoping something catches on and makes them a billionaire.

But some of what we're beginning to see now is a downturn. I'm constantly watching here in Arkansas. This morning I was reading Arkansas Business, and there was an article telling small businesses that they need to adopt AI or get left behind. Then I flipped over to my wider social feed and saw that the US Census Bureau reported AI adoption dropping for companies of almost every size except those with one to four employees. Anything beyond five employees has seen a decline for months.

Kellams: So for companies of any size that are rushing into AI, how are they using it? How are they trying to implement it?

Wright: On the small and medium-sized business side, many of them are using generative AI through large language model prompts. I've seen ChatGPT, Claude, Microsoft Copilot, and others. I think that covers the majority of small and medium-sized businesses.

Larger firms are attempting to integrate AI into their existing tools. There's also a standard called the Model Context Protocol, which lets AI plug into deeper business processes. I think large businesses are using it, or at least starting to use it, a little more.

They're also looking at the long-standing use case: chatbots in their customer service or sales pipelines. You know, when you jump onto a website and questions pop up. We all see these. They pop up everywhere. Whether you're visiting a site for the first time or logged in to one you use frequently, wait about two seconds and the fake chatbot personality pops up and says, "Hey, can I help you with something?"

Those have seen fairly limited success, in that many people don't get what they need from them. It doesn't have that personal feeling. Many of us have had the experience of a little chatbot trying to guide us through subscriptions and the like. And I'll admit, when I wasn't getting anywhere, I ended up yelling at that chatbot. Somebody may have lost a customer.

Kellams: That's one of the challenges of adopting AI early. Are there others?

Wright: Yeah, there's a lot we're seeing right now. Look at IBM's Cost of a Data Breach Report, which they've published for many years. We use it as a guideline, a kind of "why we do what we do," when presenting to clients.

There are a few things we see with AI. First and foremost, the organization doesn't actually have guardrails and controls in place. There are no policies, no procedures, and no formally adopted methodology or education for employees: How do you use this? Why do we, or don't we, allow you to use it?

And that leaves end users who struggle to see the difference between home computing and business computing. All of our clients are small and medium-sized businesses, but I come from a large-business background. I've told my business partner before that the smallest company I worked for before joining him was 6,000 people, which is still considered quite large. The biggest, I always like to joke, is the largest organization on the planet, with seemingly endless employees: the US federal government.

So coming from that side, it's a bit troubling to step into the small business world and see how people treat their business computers like their computers at home. The attitude becomes, "Hey, I want to use this AI platform, but the company doesn't provide it. I can't do my job without it, so I'll just use it anyway."

We used to call that shadow IT, and now we call it shadow AI. Since the company hasn't implemented controls, employees bring in whatever platform they can access. There's no policy control that says no. There's no technical control preventing people from going out there. So someone in a medical practice can throw a pile of patient data into ChatGPT and say, "Hey, just put this together for me." In doing so, that employee has just created a data breach, a HIPAA breach that has to be reported to the federal Department of Health and Human Services.
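The "technical control" being described here can be as simple as an egress filter that blocks unapproved AI services at the network edge. Below is a minimal, illustrative sketch in Python; the domain lists and function name are assumptions for the example, not a real product's policy or API:

```python
# Illustrative shadow-AI egress filter: block outbound requests to AI
# services the organization has not formally approved. The domain lists
# here are example placeholders, not a complete or recommended policy.

BLOCKED_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
APPROVED_AI_DOMAINS = {"copilot.microsoft.com"}  # e.g., covered by a vendor agreement

def is_request_allowed(hostname: str) -> bool:
    """Return False for known AI services that are not on the approved list."""
    host = hostname.lower().strip(".")
    for domain in BLOCKED_AI_DOMAINS:
        if host == domain or host.endswith("." + domain):
            # It's a known AI service: allow only if explicitly approved.
            return host in APPROVED_AI_DOMAINS
    # Default-allow anything that is not on the known-AI-service list.
    return True

print(is_request_allowed("claude.ai"))             # blocked, prints False
print(is_request_allowed("copilot.microsoft.com")) # approved, prints True
print(is_request_allowed("example.com"))           # not an AI service, prints True
```

In practice this kind of rule would live in a DNS filter, secure web gateway, or firewall rather than application code; the point is that "no technical control" means even this basic check is absent.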

People do these kinds of things without really understanding, because there are no structures or guardrails in place. Meanwhile, the businesses themselves are like a dragster spinning its wheels, waiting for the green light. They're trying to launch and get out there, and they implement these things without researching them. They say, "OK, this is just another vendor."

Except we investigate all our other vendors. There's a due diligence survey. We look at how vendors protect the data we entrust to them. And for some reason AI slips through and skips that process, because it seems like something completely different.

Kellams: What should companies that want to integrate AI into their business models do?

Wright: That's exactly it. Don't spin your wheels waiting for the green light. First and foremost, don't put the cart before the horse by simply saying, "I want AI."

Understand what you want to use it for. There are very good use cases out there. But many people just want to have AI; they don't have a problem they want AI to solve. So ask: is there a problem you're trying to solve that you think AI can solve?

Then, as you define that problem and design the process and its technical aspects, look for security gaps. It doesn't hurt to call someone like us and say, "Hey, can you help me talk through this?" I've been involved in cybersecurity since the early 2000s. We've seen many iterations of how we do things, and this follows a similar path.

Work with someone who knows what to look for through the process. Find where your security concerns are, where you need controls, where you need guardrails, and put them in. Verify that the product or service you select protects your data as much as possible. There's no such thing as 100% security, but you can reduce your risk by recognizing where it lies, putting mitigating controls in place, remediating where possible, and avoiding the risk when necessary.

Make sure that risk management mindset is built into the process. And when you've done that, start slowly. Don't say, "I'm going to put my whole business through this." Select a few use cases. See what they do and how it works. Measure and evaluate whether it helps you do more with the resources you have. Once everything is in place, expand from there.
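The decision loop described above, recognize the risk, mitigate it with controls where you can, and avoid the use case where you can't, can be sketched as a tiny risk-register triage. The scoring scale, thresholds, and example use cases below are illustrative assumptions, not a formal methodology:

```python
# Toy risk-register triage illustrating the accept / mitigate / avoid
# decision. The 1-5 likelihood/impact scale and the threshold of 6 are
# illustrative choices for this sketch, not an industry standard.

def triage_risk(likelihood: int, impact: int, can_mitigate: bool) -> str:
    """Score a risk (likelihood x impact) and pick a treatment."""
    score = likelihood * impact
    if score <= 6:
        return "accept"    # low risk: document it and move on
    if can_mitigate:
        return "mitigate"  # put guardrails and controls in place
    return "avoid"         # no workable control: don't adopt this use case

# Hypothetical pilot use cases for an AI rollout.
pilot = [
    ("marketing copy drafts", 2, 2, True),
    ("patient-data summarization", 4, 5, False),  # the HIPAA scenario above
    ("internal code assistant", 3, 3, True),
]
for name, likelihood, impact, can_mitigate in pilot:
    print(name, "->", triage_risk(likelihood, impact, can_mitigate))
```

Running this prints "accept" for the low-stakes drafts, "avoid" for the patient-data case, and "mitigate" for the code assistant, which mirrors the advice to pilot a few use cases and let measured risk, not enthusiasm, decide what expands.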

Kellams: Thank you very much, Chris.

Wright: Yes, certainly. Happy to do it. Let us know if you have any further questions.

Chris Wright is a partner and lead security engineer at Little Rock-based Sullivan Light Technologies. He is interested in answering your questions about AI for business. You can send questions to us at info@kuaf.edu, and your question may be answered on a future edition of Ozarks at Large.

Ozarks at Large transcripts are created on a rush deadline. A copy editor uses AI tools to check the work, but KUAF does not publish content created by AI. Please reach out to kuafinfo@uark.edu to report a problem. The audio version is the authoritative record of KUAF programming.




