How to find the right business use cases for generative AI

AI For Business


Generative AI can be useful for businesses, but the technology has notable drawbacks, such as a tendency to make simple mistakes and occasional difficulty with basic logic. With that in mind, how should organizations go about finding the right use cases to turn generative AI into a sustainable business advantage?

During a webinar hosted by MIT Sloan Management Review, MIT Sloan professor of the practice Rama Ramakrishnan laid out a three-step approach that helps companies identify the best generative AI use cases and automate parts or all of a business process. He also offered practical best-practice advice to help organizations realize the benefits of generative AI while avoiding common pitfalls.

“There are many issues you need to worry about when using [a large language model] … and there are no bulletproof solutions yet,” Ramakrishnan said, adding that research institutions and the vendor community have made great strides in addressing them.

Three Steps to Identifying Business Use Cases for LLMs

Ramakrishnan suggests taking the following steps to determine which knowledge work generative AI can best automate:

Split workflows and jobs into tasks. A job is a collection of discrete tasks that differ in how readily each can be automated with generative AI. For example, the U.S. Bureau of Labor Statistics' occupational database associates some 25 tasks with being a university professor, only a few of which can be easily automated. Preparing course materials and assignments, grading student work, and drafting lectures are tasks that can be partly automated, but leading classroom discussions and delivering lectures do not translate well into LLM use cases. “That's why you need to go through the trouble of dividing the work into individual tasks,” Ramakrishnan said. “Some of them are easy with an LLM, but others can be really difficult.”
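This first step can be sketched as a simple inventory exercise: list a job's tasks, label each one by how automatable it is, and keep only the candidates. The task names and labels below are illustrative, loosely following the university-professor example; they are not from the BLS database.

```python
# Hypothetical task inventory for one job, labeled by automatability.
professor_tasks = {
    "prepare course materials and assignments": "partly automatable",
    "grade student work": "partly automatable",
    "draft lectures": "partly automatable",
    "lead classroom discussions": "hard to automate",
    "deliver lectures": "hard to automate",
}

# Keep only the tasks worth evaluating with the cost equation (step 2).
candidates = [task for task, label in professor_tasks.items()
              if label == "partly automatable"]

print(f"{len(candidates)} of {len(professor_tasks)} tasks are LLM candidates:")
for task in candidates:
    print(" -", task)
```

The point of the sketch is the structure, not the labels: only the tasks that survive this filter move on to the cost analysis in the next step.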

Evaluate tasks using a generative AI cost equation. It is important to consider all the potential costs associated with automation. Some costs of using an LLM are obvious, such as licensing and API fees. But there are also less obvious costs, such as the time, effort, and money needed to adapt generative AI tools to the degree of accuracy the task requires, and to create mechanisms to detect and correct errors.

The cost of a task may vary based on how accurate the LLM is and how much margin for error the use case allows. Some tasks, such as writing ad copy, product descriptions, and movie plotlines, leave a bit more room for error. Applications that require logical reasoning, factual knowledge, or an understanding of cause and effect, or that are high-stakes, such as medical care, demand far more accuracy. These cases require robust mechanisms for monitoring and correcting LLM outputs, often with a human in the loop, which adds considerable effort and potential cost, Ramakrishnan said. Yet another potential cost factor enters the mix because errors can slip past human monitors and put the brand and the company's reputation at risk.

Once such costs are identified, organizations should weigh the generative AI cost equation against the cost of doing business as usual (without generative AI) and decide which is smaller. And given the pace of change in the market, a task that doesn't make sense to automate now may become easy to automate in the future.

“If you apply the equation to a specific task and it doesn't pass because the cost is too high, you need to revisit it regularly, because the cost of adoption decreases significantly as LLM capabilities steadily improve,” Ramakrishnan said.
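The cost equation described above can be expressed as a simple comparison: sum the obvious and less obvious costs of automating a task, then automate only if that total beats business as usual. All task names and dollar figures below are hypothetical illustrations, not figures from the webinar.

```python
def automation_cost(api_fees, adaptation, error_checking, error_risk):
    """Total cost of automating a task with an LLM: obvious costs
    (licenses and API fees) plus less obvious ones (adapting the model
    to the required accuracy, human-in-the-loop error checking, and the
    residual risk of errors that slip past human monitors)."""
    return api_fees + adaptation + error_checking + error_risk

def should_automate(task, llm_cost, status_quo_cost):
    """Automate only when the generative AI cost equation beats
    the cost of doing business as usual; otherwise revisit later
    as LLM capabilities improve and adoption costs fall."""
    decision = "automate" if llm_cost < status_quo_cost else "revisit later"
    return f"{task}: LLM ${llm_cost:,} vs. status quo ${status_quo_cost:,} -> {decision}"

# Low-stakes task with a wide margin for error: checking is cheap.
ad_copy = automation_cost(api_fees=500, adaptation=2_000,
                          error_checking=1_000, error_risk=500)

# High-stakes task: human review and reputational risk dominate the cost.
medical = automation_cost(api_fees=500, adaptation=10_000,
                          error_checking=25_000, error_risk=40_000)

print(should_automate("ad copy", ad_copy, status_quo_cost=15_000))
print(should_automate("medical summaries", medical, status_quo_cost=60_000))
```

Note how the same API fee produces opposite decisions: for the high-stakes task, monitoring and risk costs, not model fees, are what tip the equation, which is why Ramakrishnan advises rerunning it as those costs fall.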

Build, launch, and evaluate pilots. If a task passes the first two steps, the final step is to put the experiment into action. Companies can take a variety of approaches to pilots, including using application vendors, adapting commercial models such as GPT-4, or adapting open-source LLMs such as Llama 3.

Software vendors are also rushing to inject generative AI into existing products, as evidenced by the rise of AI copilots, a trend that is helping to accelerate the deployment of generative AI.

According to Ramakrishnan, companies need to establish a rigorous evaluation process when building LLM-based applications.



Best Practices for Using LLMs

Once companies have taken these three steps, Ramakrishnan said, following some best practices will help them implement generative AI successfully:

  • Make sure you have a rigorous evaluation process when building or evaluating LLM-based applications.
  • Don't rush to production without a robust mechanism to check and fix errors. Putting humans in the loop can be costly, but catching issues before the tool is deployed or released to customers is worth the expense.
  • Consider narrow use cases, especially if you run a small business. A targeted task requires a smaller LLM, which usually means lower costs and easier training and maintenance.
  • Find and train talent outside of traditional data science organizations. According to Ramakrishnan, it is important to identify and nurture people across the ranks who are interested in generative AI and will continually build their skill sets. “There's hidden talent in the enterprise,” he said. Using LLMs via prompts does not require a strong technical background.
  • Set ROI expectations by pursuing quick wins and prioritizing obvious use cases that serve as valuable learning exercises. Ramakrishnan noted that most organizations focus on business productivity in the first wave of LLM adoption.

“The way to get past that paralyzed state is to say: let's first do something low-stakes and simple and see what happens, but we're going to do a lot of them very quickly,” Ramakrishnan said.

Check out the webinar on getting returns from generative AI.
