ChatGPT is not an AI strategy



Since its launch in November 2022, ChatGPT, along with Google Bard and other large language models (LLMs), has been the subject of articles in the most prominent publications and television broadcasts, generating millions of posts and discussions worldwide. It sparked a global debate virtually overnight, and it is now central to the sales and investment strategies of many of the world’s largest organizations.

Employees, shareholders, customers, and partners expect organizational leaders to answer the question, “What is your AI strategy?” What is your ChatGPT strategy, and what does this technology mean for your employees?

This is a pivotal moment for leadership. Approaches that worked when creating digital and data strategies will not work this time, given the deeper questions and media attention this technology has raised.

ChatGPT is a powerful tool. If you imagine the marketplace as a chessboard, it is like a pawn that can be promoted to one of the most powerful pieces on the board, but only if it works in concert with the rest of your pieces.

LLMs are just one piece on the board

A strategy for the future of the organization requires an understanding of what LLMs can and cannot do as pieces on the board, an understanding rooted in the issue of authority.

In layman’s terms, these language models take prompts like “create an AI strategy” and, based on large amounts of data, provide answers that are surprisingly compelling at first glance.

Beyond that first glance, however, they retrieve information that already exists and recombine it based on what they “think” the answer should be. They have no authority of their own to determine whether an answer is actually true.

If a researcher publishes a paper based on years of technical research, and a student with no technical background summarizes it in five bullet points, the summary may be an accurate restatement of the underlying paper, but the student cannot know whether it is accurate. Nor can the student answer follow-up questions without going back to the research that might answer them.

The image in this article is a good example. It was generated by DALL·E 2 from the prompt “Photo of ornately carved pewter chess on a chessboard in front of a window at sunrise.” The result certainly looks like chess pieces arranged on a chessboard, but anyone who has ever learned the rules of chess, expert or not, can quickly see that a board cannot have three kings.

Real-world applications of LLMs that preserve human authority hold great promise, such as systems that allow experts to interact with archived organizational knowledge. For example, if a network engineer can describe a particular file that she knows exists but whose name and location she has forgotten, an LLM can surface far more accurate recommendations than previous systems could.
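The file-finding scenario above amounts to matching a natural-language description against archived metadata. Below is a minimal, self-contained sketch of that idea using bag-of-words cosine similarity as a stand-in for LLM embeddings; the archive paths, descriptions, and query are all invented for illustration.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Lowercased word counts as a sparse vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical archive: path -> human-written description.
archive = {
    "net/acl-backup-2021.cfg": "firewall access control list backup for the core router",
    "docs/onboarding.md": "new employee onboarding checklist",
    "net/vlan-migration-plan.txt": "plan for migrating office vlans to the new switch stack",
}

def find_file(description: str) -> str:
    """Return the archived path whose description best matches the query."""
    query = vectorize(description)
    return max(archive, key=lambda path: cosine(query, vectorize(archive[path])))

print(find_file("the document about migrating vlans to new switches"))
# -> net/vlan-migration-plan.txt
```

A production system would replace `vectorize` with embeddings from a language model, which match on meaning rather than exact word overlap, but the retrieval structure is the same.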

A key factor in the successful application of these models is that LLMs act as facilitators, helping professionals navigate and generate information, while humans continue to exercise the authority to decide whether something is accurate and true.

The rest of the pieces

LLMs are just one kind of piece on the board, alongside deep learning, reinforcement learning, autonomous AI, machine learning, sentiment analysis, and more.

Ironically, many of the other pieces on the board are more readily available and have more practical applications than LLMs, even though fewer people are familiar with them.

For example, some companies have developed autonomous AI systems that control machines for which no historical data exists. To fill that gap, they built simulations of the environment and the machines, combined them with a human-authored curriculum for operating the machines, and leveraged deep reinforcement learning so the system could create its own data through simulated experience, learning what to do and what not to do to control the machine better.
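The pattern described above, generating experience from a simulator rather than from historical data, can be sketched in miniature with tabular Q-learning. Everything here is a toy assumption: the “machine” is a hypothetical one-dimensional process that must be held at a target setpoint, and the reward shaping and hyperparameters are illustrative, not from the source.

```python
import random

random.seed(0)
TARGET = 5
ACTIONS = (-1, 1)  # nudge the process down or up
# Q-table: expected long-run reward for each (state, action) pair.
q = {(s, a): 0.0 for s in range(11) for a in ACTIONS}

def simulate(state, action):
    """Simulated dynamics: move, clamp to [0, 10], reward being at target."""
    nxt = max(0, min(10, state + action))
    reward = 1.0 if nxt == TARGET else -0.1
    return nxt, reward

for episode in range(2000):
    state = random.randrange(11)
    for _ in range(20):
        # Epsilon-greedy: mostly exploit learned values, sometimes explore.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = simulate(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Standard Q-learning update (learning rate 0.5, discount 0.9).
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = nxt

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(11)}
print(policy)  # learned policy: move up below the target, down above it
```

No historical data enters the loop; the policy is distilled entirely from simulated experience, which is the essence of the approach the companies above took, albeit with deep networks and far richer simulators.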


Another powerful piece on the board is real-time AI applied to streaming data. It allows organizations to move from applying algorithms in nightly or weekly batches, or even manually, to intelligence and learning applied in the moment.

This kind of application has strong economic potential, but it is not well known because it is not accessible to everyone on a home laptop or phone, and leaders searching for short-term value signals in the noise are at risk of missing it.

Autonomous AI, real-time AI, and generative AI all have valuable applications, and the most compelling applications combine them to create exponential value. For example, when a customer calls a customer support center, real-time AI can analyze the sentiment of the customer’s voice and transcribe the conversation to text. Until recently, that text was then used to search for and recommend supporting knowledge articles, helping customer care agents resolve customer concerns within minutes.

Adding generative AI to this scenario means the transcribed customer voice can be used as a prompt to infer intent and generate more accurate recommended responses to the customer’s problem in seconds. Human authority is maintained by surfacing the underlying knowledge articles beneath the generated text so that customer care agents can validate the generated response.
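The shape of that call-center flow can be sketched in a few lines. Everything below is an invented stand-in: the sentiment word list, the knowledge articles, the keyword-overlap retrieval, and the templated “generated” reply. A real system would call speech-to-text, sentiment, and LLM services. The design point being illustrated is that the source article travels with the draft, so the human agent retains the authority to validate it.

```python
import re

NEGATIVE_WORDS = {"broken", "angry", "refund", "cancel", "terrible"}

# Hypothetical knowledge base: article ID -> article text.
KNOWLEDGE = {
    "KB-101": "To reset your router, hold the reset button for 10 seconds.",
    "KB-202": "Refunds are processed within 5 business days of the request.",
}

def tokens(text: str) -> set:
    """Lowercased alphabetic words in the text."""
    return set(re.findall(r"[a-z]+", text.lower()))

def sentiment(transcript: str) -> str:
    """Crude stand-in for a real sentiment model."""
    return "negative" if tokens(transcript) & NEGATIVE_WORDS else "neutral"

def retrieve(transcript: str) -> str:
    """Pick the article sharing the most words with the transcript."""
    t = tokens(transcript)
    return max(KNOWLEDGE, key=lambda k: len(t & tokens(KNOWLEDGE[k])))

def assist(transcript: str) -> dict:
    """Bundle sentiment, a draft reply, and its source for agent review."""
    article_id = retrieve(transcript)
    return {
        "sentiment": sentiment(transcript),
        "draft_reply": f"Suggested answer: {KNOWLEDGE[article_id]}",
        "source": article_id,  # surfaced so the agent can validate the draft
    }

print(assist("my router is broken and I need to reset it"))
```

For this transcript the sketch returns a negative sentiment, the router-reset article as the draft reply, and `KB-101` as its source, which the agent can check before responding.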

In a sea of change, with AI pieces receiving varying degrees of investment and valuation, the leaders who create the most value for their customers and their organizations will be those who can assess the value of each piece without losing sight of the board as a whole, and who do not sacrifice a broader strategy in favor of quick tactics.

Strategy cannot precede vision

The answer to the question of an AI strategy that makes the most of every piece on the board starts with vision. What is the future envisioned for the organization? What is the expected and desired future of the market?

The answer many people inevitably reach for is to study trends or collect data: what do Gartner or IDC say the future holds?

These resources and practices are valuable, but the responsibility of setting the organization’s vision for the future cannot be outsourced. Nor should that vision merely react to hypothetical trends that someone else has envisioned based on investments made by other organizations.

Leaders must start with the difficult but essential question of what kind of future they want to create for their employees, partners, and customers, and work backwards from that future to the present. In this process, LLMs and other technologies act not as the foundation of the strategy but as powerful tools that enable it and clarify what investments must be made to create that future.

Learn more about DataStax here.

About Brian Evergreen

DataStax

Brian Evergreen advises Fortune 500 executives on artificial intelligence strategy. He is the author of the book “Autonomous Transformation: Creating a More Human Future in the Age of Artificial Intelligence” and the founder of The Profitable Good Company, a leadership advisory that partners with and develops leaders to create a more human future in the age of AI.



