5 Key Learnings About AI and ChatGPT in the Enterprise

Applications of AI

If 2022 was the year AI broke through and began changing society, 2023 will be the year AI breaks through in the enterprise. In other words, generative AI and large language models (LLMs) have become part of the IT department lexicon around the world. CIOs are now more likely to hear the name ChatGPT than Kubernetes.

It’s already been a year of big strides for AI in the enterprise. Here are five things I’ve learned so far.

1. ChatGPT is not the only option for businesses

In March, OpenAI announced the enterprise version of ChatGPT. In this case, though, OpenAI was not the first to market — it was the fast follower. Cohere, a Toronto-based company with close ties to Google, was already selling generative AI to companies, as I discovered when talking to them earlier that month.

Cohere CEO Martin Kon told me that its approach is fundamentally different from OpenAI’s. “OpenAI wants you to bring your data to Azure-only models. Cohere wants to bring our models to your data in whatever environment you feel comfortable with.”

So far, companies have mostly used generative AI to create semantic search engines over their private data. A related use case is knowledge management (KM): imagine your employees conversing with a chatbot powered by a large language model trained on your company’s data.

One of the many new AI companies looking to deliver KM in the form of a chatbot is Vectara. That, company CEO Amr Awadallah told me, “is what we’re trying to achieve — how we express what we’re trying to do.”
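The retrieval step behind this kind of semantic search can be sketched in a few lines. A real system would use dense embeddings produced by an LLM and a vector database, but a toy bag-of-words version (a simplification of mine, not any vendor’s actual implementation) shows the shape of the idea: embed the query and every private document as a vector, then return the document closest to the query.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector. Real systems
    use dense vectors produced by an LLM embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str, docs: list[str]) -> str:
    """Return the company document most similar to the query."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

docs = [
    "Q3 sales report for the enterprise division",
    "Employee onboarding handbook and HR policies",
    "Kubernetes cluster runbook for the platform team",
]
print(search("which handbook covers a new employee", docs))
# → Employee onboarding handbook and HR policies
```

In a KM chatbot, the retrieved document would then be handed to the LLM as context for answering the employee’s question.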

2. Big tech platforms are pushing low-code tools

Last month, Google Cloud and Google’s Workspace division announced a number of new AI capabilities. These included the Generative AI App Builder, which “enables organizations to build their own AI-powered chat interfaces and digital assistants,” and new Generative AI capabilities in Google Workspace.

Google has coined the typically awkward term “gen apps” for applications powered by generative AI. It claims gen apps will become the third major category of internet applications, after web apps and mobile apps.

I’m fairly confident the term “gen apps” will never catch on, but Google’s AI-powered tools will be put to good use in the enterprise regardless.

Similarly, Microsoft has been releasing new AI tools such as Semantic Kernel (SK), described as “an open-source project that helps developers integrate state-of-the-art AI models into their apps quickly and easily.” SK is in many ways a classic Microsoft low-code tool, only this one is focused on helping users “prompt” an AI chatbot.
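The core idea these prompting tools wrap is the reusable prompt template: a prompt with named slots that get filled in before being sent to an LLM. The sketch below illustrates that concept only — it is not Semantic Kernel’s actual API, and the template text and variable names are my own inventions.

```python
# A reusable "skill" is just a prompt with slots. (Illustrative only;
# this is NOT Semantic Kernel's real API — SK has its own template
# syntax and connector classes.)

SUMMARIZE_TEMPLATE = (
    "Summarize the following text in one sentence "
    "for a {audience} audience:\n\n{input}"
)

def render_prompt(template: str, **variables: str) -> str:
    """Fill the template's slots; the result would be sent to an LLM."""
    return template.format(**variables)

prompt = render_prompt(
    SUMMARIZE_TEMPLATE,
    audience="executive",
    input="Quarterly revenue grew 12% on strong cloud demand.",
)
print(prompt)
```

Low-code tools essentially let users build, chain, and share such templates without writing the surrounding glue code themselves.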

3. LLMs vary widely in size, but size isn’t everything

A scan of Stanford University’s HELM website, which benchmarks LLMs in a variety of ways, reveals that these models vary widely in size. Simply put, there is a trade-off between model size and speed of operation.

OpenAI alone has several models, ranging from 1.3 billion parameters (its Babbage model) to 175 billion parameters (its DaVinci model).
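A back-of-the-envelope calculation shows why size trades off against speed: at 16-bit precision each parameter takes two bytes, so merely holding a model’s weights in memory scales linearly with parameter count (real serving costs also depend on hardware, batching, and quantization — this is a rough sketch, not a vendor figure).

```python
def fp16_gigabytes(parameters: float) -> float:
    """Approximate memory needed to hold the weights at fp16:
    2 bytes per parameter, reported in gigabytes."""
    return parameters * 2 / 1e9

for name, params in [("Babbage (1.3B)", 1.3e9), ("DaVinci (175B)", 175e9)]:
    print(f"{name}: ~{fp16_gigabytes(params):.1f} GB at fp16")
# Babbage fits on a single commodity GPU; DaVinci needs a
# multi-GPU cluster just to load, before any inference happens.
```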

Cohere differentiates its model sizes like Starbucks cups: small, medium, large, and extra large.

List of Cohere models in the Stanford HELM directory

Key models of OpenAI in the Stanford HELM directory

However, Stanford also measures “accuracy,” and in those statistics size doesn’t seem to matter much.


Accuracy testing of ML models with Stanford HELM

4. Tools like Ray help scale AI

In this new era of generative AI, frameworks like Ray are as important as Kubernetes for building modern applications at scale. Ray is an open source distributed computing framework for machine learning, used by both OpenAI and Cohere to help train their models, as well as by other highly scaled products, such as Uber.

The company behind Ray is Anyscale, whose CEO Robert Nishihara told me earlier this year that it is very developer-centric. All of Ray’s features are designed to be developer-friendly, he said, noting that this differs from the Kubernetes user experience, which is notoriously difficult for developers. Ray was designed for Python developers — Python being the leading programming language used in AI systems.

Anyscale co-founder Ion Stoica says Ray is “like an extension of Python” and, like Python, has a set of libraries aimed at different use cases. The awkwardly named RLlib is for reinforcement learning, and there are similar libraries for training, serving, data preprocessing, and more.
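Ray’s core primitive is the remote task: an ordinary Python function becomes a unit of work that can be fanned out across a cluster and gathered back. The standard library offers a single-machine analogy of the same shape. To be clear, the sketch below is plain stdlib Python, not Ray’s API — in Ray you would decorate the function with `@ray.remote` and collect results with `ray.get`.

```python
from concurrent.futures import ThreadPoolExecutor

def train_shard(shard: list[int]) -> int:
    """Stand-in for an expensive training step on one data shard;
    here it just sums the squares of the shard's values."""
    return sum(x * x for x in shard)

shards = [[1, 2], [3, 4], [5, 6]]

# Fan the shards out to workers, then gather the partial results —
# the same submit/gather shape Ray scales from one laptop to a
# cluster of machines.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(train_shard, shards))

print(results)  # → [5, 25, 61]
```

The appeal Nishihara describes is exactly this: the distributed version reads like the single-machine version, with no cluster plumbing visible in the application code.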

5. Business Intelligence will be reinvented in the AI era

Generative AI is the catalyst for a new wave of data intelligence companies, just as cloud computing ushered in a plethora of “big data” solutions.

I recently spoke with Aaron Kalb, co-founder of Alation, which calls itself a “data intelligence” platform and promotes a concept called the “data catalog”: a combination of “machine learning and human curation” that creates a custom store of data for the enterprise.

Kalb says that both AI and that perennial corporate acronym BI (business intelligence) are “garbage in, garbage out.” Data intelligence, he said, is “the layer that precedes AI and BI, helping us find, understand, and trust the right data to put into AI and BI.”

In this context, he said, bringing something like ChatGPT from the public internet into a company would be very dangerous. He thinks the data should be, well, more intelligent before being used by AI systems within the enterprise. And he doesn’t think enterprises will ever need the “internet scale” of ChatGPT or similar systems. Every organization has its own terminology, he explained — whether industry jargon or terms specific to the company.

Conclusion: The AI Enterprise Has Arrived

It’s hard to believe it has been just one year since the advent of generative AI. It all started with OpenAI’s DALL-E 2, which was announced last April and launched as a private beta in July. DALL-E 2, an image generation service powered by deep learning models, was a big step forward for the industry. Also last July, a company called Midjourney released its eponymous text-to-image generator.

But the AI hype really skyrocketed in August, with the release of Stable Diffusion, another deep learning text-to-image generator. Unlike DALL-E 2 and Midjourney, Stable Diffusion’s licensing structure was generous. This is around the time people began looking at the models behind all these services: LLMs. DALL-E 2 used a version of GPT-3, OpenAI’s main LLM at the time.

But in hindsight, the image generators were just an appetizer. At the end of November, OpenAI launched ChatGPT, a chatbot built on top of GPT-3.5. That was the catalyst for generative AI to enter the enterprise, and it has been rapidly spreading through IT departments ever since.
