AI is a hot topic, and at its annual Think conference IBM put it at the center of its hybrid cloud strategy. Over the past few years, while other companies have focused on consumer-facing AI applications, IBM has been developing a new generation of models that better serve enterprise customers.
IBM announced watsonx, an AI development platform for hybrid cloud applications. The IBM watsonx AI development services are in technology preview and are expected to be generally available in Q3 2023.
This new generation of AI is designed to be a critical business tool that enables a new era of productivity, creativity, and value creation. But for enterprises, adopting this new class of AI constructs, commonly referred to as Large Language Models (LLMs), is about more than just cloud access. LLMs form the basis of generative AI products such as ChatGPT, but enterprises have many issues to consider, including data sovereignty, privacy, security, reliability (no drift), accuracy, and bias.
An IBM survey of businesses found that between 30% and 40% see business value in AI, a doubling since 2017. One forecast referenced by IBM says AI will contribute $16 trillion to the global economy by 2030. While the study quantifies AI-enabled productivity gains, new value will likely emerge beyond those gains, just as no one could have predicted the future value of the early Internet. By improving productivity, AI also fills many of the gaps between enterprise skill requirements and the people available with those skills.
Today, AI is already improving software programming by making it faster and less error-prone. Red Hat makes writing code easier with IBM’s Watson Code Assistant, powered by watsonx, which predicts and suggests the next code segment a developer will type. This application of AI is highly efficient because it targets the specific programming model of the Red Hat Ansible Automation Platform. The Ansible code assistant is 35x smaller than other, more popular code assistants because it is more narrowly scoped and optimized.
Another example is SAP, which has incorporated IBM Watson AI services to power the digital assistant in SAP Start. The new AI capabilities in SAP Start help improve user productivity through both natural language interaction and predictive insights delivered by IBM Watson AI solutions. SAP has found that up to 94% of queries can be answered by the AI.
Bringing Watson to Life
The IBM AI development stack has three parts: watsonx.ai, watsonx.data, and watsonx.governance. The watsonx components are designed to work together and can also work with third-party integrations such as HuggingFace’s open source AI models. And watsonx can run on multiple cloud services such as IBM Cloud, AWS, and Azure, as well as on-premises servers.
The watsonx platform is delivered as a service and supports hybrid cloud deployments. These tools enable data scientists to rapidly engineer and tune custom AI models. The model then becomes the critical engine for the company’s business processes.
The watsonx.data service connects data from multiple sources to the rest of watsonx using open table formats, and it manages the lifecycle of the data used to train watsonx models.
The watsonx.governance service is used to manage the model lifecycle, applying active governance to models as they are trained and refined on new data.
The heart of the product is watsonx.ai, where the development work takes place. IBM itself is currently developing 20 foundation models (FMs) with different architectures, modalities, and sizes. In addition, open source models from HuggingFace are available on the watsonx platform. IBM expects some customers to develop their own applications, but it also offers consulting to help select the right model, retrain it on customer data, and accelerate development where needed.
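To make the open source option concrete, the snippet below is a minimal, hypothetical sketch of running one of the publicly available HuggingFace models with the open `transformers` library. It is not the watsonx.ai API; the model name and prompt are illustrative stand-ins for whatever a customer would actually select on the platform.

```python
# Hypothetical illustration only: this calls the public Hugging Face
# `transformers` library directly, not IBM's watsonx.ai service.
from transformers import pipeline

# Load a small, publicly available open source model
# (google/flan-t5-small is just an example checkpoint).
generator = pipeline("text2text-generation", model="google/flan-t5-small")

prompt = "Summarize: The quarterly report shows revenue grew 12% year over year."
result = generator(prompt, max_new_tokens=50)
print(result[0]["generated_text"])
```

On watsonx, the selection and hosting of such a model would be handled by the platform itself; the point of the sketch is simply that the same open checkpoints published on HuggingFace are part of the catalog IBM describes.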
Over three years of research have gone into the development of the watsonx platform. Before releasing watsonx, IBM built its own AI supercomputer, named “Vela,” to study effective system architectures for building FMs (see article link below) and built its own model library, acting as its own “Client 0” for the AI platform.
The Vela architecture uses standard Ethernet networking switches (no expensive Nvidia/Mellanox switches), making it easier and cheaper to build than traditional AI supercomputers, and it can be replicated more easily if clients want to run watsonx on-premises. PyTorch has also been optimized for the Vela architecture, and IBM found a performance overhead of only 5% when running it virtualized.
IBM’s watsonx supports IBM’s commitment to a hybrid cloud strategy running on Red Hat OpenShift. The watsonx AI development platform runs on the IBM Cloud, on other public clouds such as AWS, or on customer premises, so this modern AI technology is available wherever companies choose to deploy it. IBM is truly combining cutting-edge AI and hybrid cloud with watsonx.
To clarify the naming convention, watsonx is IBM’s AI development and data platform for delivering AI at scale, while Watson-branded products are digital labor products with embedded AI expertise. Watson-branded products include Watson Assistant, Watson Orchestrate, Watson Discovery, and Watson Code Assistant (formerly Project Wisdom). IBM is putting renewed emphasis on the Watson brand. The product formerly known as Watson Studio has been folded into watsonx.ai to support the development of new foundation models alongside access to traditional machine learning capabilities.
FMs and LLMs
Over the past decade, deep learning models have been trained on large piles of labeled data for each application, an approach that does not scale. FMs and LLMs are instead trained on large amounts of unlabeled data, which is much easier to collect, and a single pretrained foundation model can then be applied to multiple tasks.
Using the term “LLM” is actually a misnomer for this new class of AI that leverages pre-trained models to perform multiple tasks. The word “language” implies the technique is only suitable for text, whereas models can also be trained on code, graphics, chemical reactions, and more. A more descriptive term that IBM uses for these large pretrained models is foundation model. To build an FM, a large dataset is used to train a base model. This FM can be used as-is or tuned for a specific application. Tuning an FM for your application also lets you set appropriate guardrails and makes the model directly more useful. FMs can also be used to accelerate non-generative AI applications such as data classification and filtering.
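As a rough illustration of that tuning step, the sketch below adapts a small open pretrained model to a narrow classification task using the HuggingFace `transformers` and `datasets` libraries. This is a generic, hypothetical example rather than IBM’s tooling; the model name and the public `imdb` dataset are stand-ins for an enterprise’s own base model and labeled data.

```python
# Hypothetical sketch of tuning a pretrained foundation model for a
# specific task (text classification) -- not IBM's watsonx.ai API.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # small open pretrained model as a stand-in
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# An enterprise's own labeled data would go here; imdb is a public placeholder.
dataset = load_dataset("imdb", split="train[:2000]").train_test_split(test_size=0.1)
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)
trainer.train()  # the pretrained FM is adapted, not trained from scratch
```

The key point is that only a relatively small, task-specific dataset is needed, because the heavy lifting was already done when the foundation model was pretrained on unlabeled data.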
Many LLMs are big, and they’re getting bigger as they try to train on every kind of data and serve any open-domain task. In an enterprise environment, this approach is often overkill and can lead to scaling issues (see article link below). Selecting the right dataset and applying it to the right type of model can produce a more efficient final model. Such a model can also use IBM’s watsonx.governance to screen out bias, copyrighted material, and the like.
At some point during IBM Think, AI was said to be in a “Netscape moment,” a tipping point akin to when far more users were first exposed to the capabilities of the Internet. ChatGPT exposed generative AI to a wider audience. But there is still a need for responsible AI that businesses can trust and control.
And as Dario Gil said in his closing keynote, “Don’t outsource your AI strategy to API calls.” The same sentiment was echoed by HuggingFace’s CEO: “Own your model.” Don’t just borrow other people’s models. IBM gives enterprises the tools to build responsible, efficient AI and to own their models.
Tirias Research tracks and consults for companies across the electronics ecosystem, from semiconductors to systems and sensors to the cloud. Members of the Tirias Research team have consulted for IBM and other companies across the server, AI, and quantum ecosystems.