Be honest: does AI align with your company’s values?



“When you hire someone new, you hire them for their skills,” she says, “but when you onboard them, you explain the company culture and how things work so they're operating within their understanding. That's LLM onboarding, and it's crucial for any organization or company.” To have a meaningful impact on a model, fine-tuning requires a dataset that's 0.5% to 1% of the size of the model's original training dataset, she says.

GPT-4 reportedly has over 1 trillion parameters, so even 1% is a substantial amount of data, but companies don't need to cover the entire dataset when fine-tuning.
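The sizing heuristic above is simple enough to check with arithmetic. Here is an illustrative Python sketch of the 0.5%–1% rule of thumb; the 10-trillion-token pretraining figure is a made-up example, not a claim about any real model.

```python
def fine_tune_dataset_range(original_tokens: int) -> tuple[int, int]:
    """Return the (low, high) fine-tuning dataset size implied by the
    0.5%-1% rule of thumb quoted in the article."""
    return int(original_tokens * 0.005), int(original_tokens * 0.01)

# Hypothetical example: a model pretrained on ~10 trillion tokens.
low, high = fine_tune_dataset_range(10_000_000_000_000)
print(f"{low:,} to {high:,} tokens")  # → 50,000,000,000 to 100,000,000,000 tokens
```

Even at the low end, that is tens of billions of tokens, which is why the advice that follows is to target one narrow business process rather than the whole model.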

“You can't write 10 questions and answers, tweak your model and then claim that it's perfectly aligned with your organization's values,” says Iragavarapu. “But you don't need to tweak everything either; just tweak certain business processes or culture. It's important to dive deep into one small area or concept, rather than addressing the entire LLM.”

With the right tweaks, you can overcome the model's core alignment, she says. To find out whether the tweaks have worked, you need to test the LLM against a large number of questions, asking the same thing in different ways.

As of now, there is no good way to automate this testing, nor an open-source LLM designed specifically to test the alignment of other models, though there is clearly a great need for one.
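The testing approach described above, asking the same question in different phrasings and checking the answers for consistency, can be sketched in a few lines. This is a minimal illustration, not an existing tool: the canned model and the keyword-based judge are stand-ins for a real LLM call and a real alignment evaluator.

```python
from typing import Callable, Sequence

def alignment_consistency(
    ask_model: Callable[[str], str],
    paraphrases: Sequence[str],
    is_aligned: Callable[[str], bool],
) -> float:
    """Fraction of paraphrased questions whose answers the judge accepts."""
    if not paraphrases:
        raise ValueError("need at least one paraphrase")
    return sum(is_aligned(ask_model(q)) for q in paraphrases) / len(paraphrases)

# Toy stand-ins: a canned "model" and a keyword judge (both hypothetical).
canned = {
    "Can we share customer data with partners?": "Only with explicit consent.",
    "Is it OK to pass client data to third parties?": "Only with explicit consent.",
    "May we sell user records?": "Yes, if the price is right.",
}
score = alignment_consistency(
    ask_model=lambda q: canned[q],
    paraphrases=list(canned),
    is_aligned=lambda a: "consent" in a.lower(),
)
print(round(score, 2))  # → 0.67
```

A score below 1.0 flags exactly the failure mode Iragavarapu describes: the model holds the company line for some phrasings of a question but not others.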

As simple Q&A use cases evolve into autonomous, AI-powered agents, this kind of testing becomes an absolute necessity. “Every organization needs this tool now,” says Iragavarapu.

Vendor Lock-in

When companies have no choice but to use a specific AI vendor, maintaining consistency becomes a constant battle.

“If it's built into Windows, for example, you might not have that control,” says Globant's Lopez Murphy. But if you can easily switch to a different vendor, an open source project, or a home-grown LLM, that makes things much easier. Having options keeps the provider honest and puts the power back in the hands of the corporate buyer. Globant itself has an integration layer, an AI middleware, that makes it easy to switch models. “It could be a commercial LLM,” he says. “Or something you have locally, or [AWS] Bedrock.”
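The middleware idea can be sketched as a thin routing layer: callers depend on one interface, and the backing model is swapped by name. This is a minimal illustration of the pattern, not Globant's actual implementation; the provider names and lambda backends are hypothetical placeholders for real API or local-model calls.

```python
from typing import Callable, Dict

class ModelRouter:
    """Registry that lets callers switch LLM providers without code changes."""

    def __init__(self) -> None:
        self._providers: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, complete: Callable[[str], str]) -> None:
        self._providers[name] = complete

    def complete(self, provider: str, prompt: str) -> str:
        if provider not in self._providers:
            raise KeyError(f"no provider registered as {provider!r}")
        return self._providers[provider](prompt)

router = ModelRouter()
# Placeholder backends; in practice these would wrap a commercial API,
# a local model, or a hosted service such as AWS Bedrock.
router.register("local", lambda p: f"[local] {p}")
router.register("bedrock", lambda p: f"[bedrock] {p}")
print(router.complete("local", "Summarize this contract."))  # → [local] Summarize this contract.
```

Because the switch is a one-line configuration change rather than a rewrite, this kind of layer is what gives buyers the negotiating leverage Lopez Murphy describes.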

And some organizations are rolling out their own models. “That’s why some governments want to have their own sovereign AI, so they’re not reliant on the sensibilities of Silicon Valley companies,” Lopez Murphy says.

Governments aren't the only ones that need a high degree of control over the AI they use. For example, Blue Cross Blue Shield Michigan has high-risk AI use cases, including cybersecurity, contract analysis, and answering questions about member benefits. These are highly sensitive and regulated areas, so the company built its AI systems in-house, in a dedicated, secure, managed cloud environment.

“We do everything in-house,” Fandrich said, “training and controlling the models in a private segment of the network, and then deciding if and how to put them into production.”


