What you'll learn:
- Companies can realize the value of AI through small-scale efforts at three levels: improving individual productivity, incorporating AI into defined tasks, and automating production processes.
- Vanguard Group estimates the ROI of its AI efforts at nearly $500 million, with proven use cases including call center support, personalized summaries for advisors, and a 25% improvement in programming productivity.
- Only 47% of business professionals say their AI policies reflect the realities of their work. Researchers recommend that front-line leaders create the rules while management sets the guardrails.
New insights from MIT Sloan Management Review focus on achieving real results from your artificial intelligence investments. There are also words of caution about letting a central office dictate the rules for AI use.
Get big value from small efforts
When MIT Sloan senior lecturers Melissa Webster and George Westerman searched for examples of companies using generative AI to achieve significant transformation, they found none. What did they find? In a webinar, they explained that smart leaders can derive significant value from small AI efforts, deployed through a careful, systematic approach at three levels:
- Create a safe environment in which individual employees can be more productive. Common use cases include inbox management, meeting transcription, calendar optimization, and briefing preparation. Generative AI can also adapt the tone and cultural norms of business documents. This is useful, for example, when a European is writing to an American.
- Incorporate generative AI into clearly defined tasks and roles. Developers can get help writing code, analyzing data, and creating documentation. AI agents can quickly surface answers to common questions for sales and call center staff. Design teams can generate proposals from a few lines of text and visualize ideas live in the room with clients.
- Bring automation to production and operational processes. AI can help marketing teams create entire campaigns, not just the content within them. Meanwhile, enterprise software packages can now automate processes from managing supply chains to identifying skill gaps within the workforce, all supported by conversational AI interfaces.
To advance their strategy, leaders must balance immediate action with long-term thinking. The goal, as one AI director told Webster and Westerman, is to “get your footing.” Along the way, it's important to align AI efforts with core business functions; otherwise, pilot projects may be doomed to remain on the sidelines.
Watch “Scaling Generative AI: Get Big Value from Small Efforts”
Ensuring AI delivers results
Vanguard Group estimates the ROI from its AI efforts to be close to $500 million. Using AI, the asset manager has improved the efficiency of its contact center employees and advisors. This allows Vanguard to expand access to human support for investors and extend digital advisory services to customers investing as little as $100.
MIT Initiative on the Digital Economy columnists Thomas H. Davenport and Randy Bean outline successful AI investments at Vanguard:
- AI assistants in call centers help human agents pull answers from internal content and resolve issues faster.
- Auto-generated, personalized summaries of Vanguard market perspectives help advisors keep their clients informed.
- AI-assisted code generation increased programming productivity by 25% and shortened the system development lifecycle by 15%.
- A large language model analyzes company earnings for signals of dividend cuts.
Alongside these proven use cases are dozens of pilots that IT leaders won't deploy at scale “until the problem is resolved,” Davenport and Bean write. In the meantime, Vanguard continues to monitor the performance and utilization of its AI models. The company also takes particular pride in training: 50% of its employees have completed training through the Vanguard AI Academy.
Read “Return on your AI investment at Vanguard”
Understand how LLMs work
Knowing how large language models work is “a necessary foundation for making sound business decisions about the use of AI technology within an enterprise,” writes Rama Ramakrishnan, a professor of the practice at MIT Sloan. In a recent article, he answers the 10 questions executives most commonly ask about AI. Among them:
- A model can answer questions about events after its training cutoff date only if it has access to live data.
- If you upload a document with a prompt, there is no guarantee that the answer will be limited to that document, even if you request it. The model may still draw on similar documents from its training data.
- Modern models have context windows large enough to hold entire books, but performance can suffer if prompts contain too much or irrelevant information. The model also “tends to focus on the beginning and end of a prompt, potentially missing important information in the middle.”
- Hallucinations cannot be eliminated. To mitigate them, consider using a second model to validate the first model's output, or focus on structured tasks and data formats that are easier to validate.
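The last point above can be made concrete with a small sketch. The function names, the required fields, and the sample reply below are illustrative assumptions, not anything from Ramakrishnan's article: the idea is simply that if you ask a model for a structured format such as JSON, you can mechanically reject malformed or incomplete replies before a human ever sees them.

```python
import json

def validate_structured_output(raw: str, required_keys: set[str]) -> dict:
    """Check that a model's raw reply is valid JSON containing the
    expected fields. Rejecting malformed output mechanically is far
    cheaper than fact-checking free-form text."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as err:
        raise ValueError(f"model did not return valid JSON: {err}")
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"model reply is missing fields: {missing}")
    return data

# A stand-in for a model reply; in practice this string would come
# from an LLM call that was prompted to answer in JSON.
reply = '{"ticker": "XYZ", "dividend_risk": "high", "evidence": "..."}'
parsed = validate_structured_output(reply, {"ticker", "dividend_risk", "evidence"})
```

A second model can play the same role for free-text answers, but structured formats make the check deterministic, which is why tasks with verifiable output formats are a natural first target.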
Read “How LLMs Work: Top 10 Executive-Level Questions”
Have team leaders create AI rules
When companies adapted to the Internet, they didn't create Internet departments or require employees to get approval to launch websites. The same should be true for AI, argue MIT Sloan senior lecturer Robert C. Pozen and Gentreo CEO Renee Fry. Executives should build the guardrails, but individual teams should define the rules for using AI.
At the moment, that's not happening. Only 47% of business professionals surveyed by the authors said their AI policies reflect the realities of their work. This matters because different departments use AI differently. As the authors put it, “judgments are local,” and it should be up to front-line leaders to translate broad corporate policies into concrete practices. When rules don't fit employees' daily work, they'll either resort to back channels that put the company at risk or ignore the AI tools the company has invested in.
Executives need to recognize that “decentralization is not abdication,” Pozen and Fry write. Leaders still need to define policies around privacy, security, intellectual property, and ethics, and they remain responsible for building the AI platform and training programs. From there, it's up to front-line managers to decide where, when, and how to implement AI.
Read “Let team leaders create the rules to improve AI productivity”
