For the past 30 years, the dream of being able to collect, manage, and leverage an organization's knowledge assets has never truly come true. Systems for sharing information assets across the enterprise have become highly evolved, but they have not been able to take the next step: effectively transforming the information residing in digital files into usable knowledge. Data resides in ever-larger silos, but the real knowledge still resides with employees.
However, with the rise of Large Language Models (LLMs), true Knowledge Management (KM) is beginning to become a reality. These models can extract meaning from digital data at a scale and speed beyond the capabilities of human analysts. According to the 2023 State of the CIO Survey, 71% of CIO respondents expect to be more involved in business strategy over the next three years, and 85% will focus more on digital transformation and innovation. Applying LLMs to an organization's knowledge assets can accelerate these trends.
Less is better
OpenAI's ChatGPT and DALL-E 2 Generative AI (GenAI) models have revolutionized the way we think about AI and what it can do. From writing poetry to creating images, it's amazing how computers can create new content from a few simple prompts. However, the LLMs used to perform these tasks are enormous and expensive for OpenAI to operate. GPT-4 was trained on over 45 terabytes of text data across more than 1,000 GPUs for 34 days, costing about $5 million in computational power. In 2022, OpenAI lost $540 million, despite raising $11.3 billion in a funding round.
Clearly, these costs and this scale of operations are beyond the capabilities of most organizations wishing to develop their own LLM. But for many companies, the future of AI lies in building and adapting smaller models based on their internal data assets. Rather than relying on vendor-provided APIs such as OpenAI's, with the risk of uploading potentially sensitive data to third-party servers, new approaches now allow companies to bring small LLMs in-house. Parameter-efficient tuning of LLMs, new languages such as Mojo, and AI programming frameworks such as PyTorch can significantly reduce the computing resources and time required to run AI programs.
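The savings from parameter-efficient tuning can be made concrete with a back-of-the-envelope calculation. The sketch below (plain Python; the dimensions are illustrative, not drawn from any specific model) compares the trainable parameters in full fine-tuning of a single weight matrix against a low-rank (LoRA-style) adapter of rank r:

```python
# Back-of-the-envelope comparison: full fine-tuning vs. a low-rank
# adapter for one d_out x d_in weight matrix. Dimensions are illustrative.

def full_finetune_params(d_out: int, d_in: int) -> int:
    """Full fine-tuning: every entry of the weight matrix is trainable."""
    return d_out * d_in

def lora_params(d_out: int, d_in: int, rank: int) -> int:
    """LoRA freezes the original weights W and trains two small factors,
    B (d_out x rank) and A (rank x d_in), so the tuned weight is W + B @ A."""
    return d_out * rank + rank * d_in

if __name__ == "__main__":
    d = 4096   # hidden size typical of a mid-sized transformer layer
    r = 8      # a commonly used LoRA rank
    full = full_finetune_params(d, d)
    lora = lora_params(d, d, r)
    print(f"full fine-tuning: {full:,} trainable parameters")
    print(f"LoRA (rank {r}):  {lora:,} trainable parameters")
    print(f"reduction:        {full // lora}x")   # 256x for these dimensions
```

Multiplied across every layer of a model, this is why adapter-based tuning fits on hardware that full fine-tuning cannot.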
Open is better
Just as the web is built on open source software and protocols, many enterprise AI initiatives will likely be built on open source models such as LLaMA and freely available techniques such as LoRA. According to a recently leaked Google memo,
“The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”
These barriers to entry will fall even further, the results will get better, and startups and enterprises will be able to build niche models focused on their specific business and workflow needs.
From GenAI to SynthAI
At the heart of these developments is a transition: from AI systems that create new content based on simple prompts, to models trained on a company's internal data and designed to generate useful insights and recommendations.
LLMs such as ChatGPT often produce plausible-sounding results, but it is not clear how the data fed into the model was used, or whether the answers the model gives are true or hallucinated. A recent case in which a New York attorney used ChatGPT to prepare court filings, citing supposedly historical cases to support her client's claims, demonstrated the dangers of relying on GenAI output. Despite appearing to be genuine precedents, six of the cited cases never actually existed.
Silicon Valley venture capital firm A16Z recently outlined its belief that the future of AI in the workplace lies not in general-purpose LLMs like ChatGPT, but in more focused models designed to address specific business needs. They call this SynthAI: models trained on a company's own datasets and optimized for specific purposes, such as resolving customer support issues, summarizing market research findings, or creating personalized marketing emails.
Applying the SynthAI approach to better manage enterprise data assets is a natural evolution for the next phase of the AI revolution. Consulting firm BCG employs this approach for its 50-year archive, data gathered primarily from reports, presentations, surveys, and client engagements. Previously, employees could only search these files by keyword and then read each document to judge its relevance. Now the system provides useful answers to questions directly.
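At its simplest, a system like this starts with retrieval: score every document in the archive against a question and surface the best match, then let a model answer from it. The toy sketch below (standard library only; the file names, archive contents, and scoring scheme are illustrative, not BCG's actual system) ranks documents by bag-of-words cosine similarity:

```python
# Toy document retrieval: rank archive documents against a question by
# cosine similarity of bag-of-words vectors. Illustrative only.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Lowercased bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def best_match(question: str, archive: dict[str, str]) -> str:
    """Return the name of the archive document most similar to the question."""
    q = vectorize(question)
    return max(archive, key=lambda name: cosine(q, vectorize(archive[name])))

if __name__ == "__main__":
    # Hypothetical two-document "archive"
    archive = {
        "pricing_study.txt": "survey results on pricing strategy for retail clients",
        "supply_chain.txt": "report on supply chain resilience after disruptions",
    }
    print(best_match("what did the pricing survey find", archive))
    # -> pricing_study.txt
```

In a production SynthAI-style system, embeddings from a model tuned on the company's own data would replace the bag-of-words vectors, and the retrieved passages would be fed to an LLM to compose the answer.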
The dream of knowledge management is becoming a reality.