
On episode 105 of the AI/Hyperautomation Minute, Toni Witt explores what lies behind generative AI and its underlying technology, the Generative Pre-Trained Transformer (GPT) machine learning model, and how it has evolved.
This episode is sponsored by the Acceleration Economy’s Generative AI Digital Summit on May 25th. Registration for the event is free, with sessions discussing how solutions like ChatGPT are shaping the future of work, customer experience, data strategy, cybersecurity, and more. Sign up now to reserve your spot.
Highlights
00:26 — There is a lot of discussion about generative AI, but non-technical people may still misunderstand the underlying technology and its evolution.
01:03 — Toni clarifies that ChatGPT is a web-based tool that provides access to the underlying machine learning model, GPT-3. GPT-3 is a word predictor. It is essentially a form of deep learning, whose capabilities are a subset of what machine learning and AI can do.
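The "word predictor" idea can be sketched with a toy bigram model: like GPT at a vastly larger scale, it guesses the most likely next word given what came before. The corpus and logic below are invented purely for illustration and bear no resemblance to GPT-3's actual architecture.

```python
from collections import Counter, defaultdict

# Invented toy corpus for illustration only.
corpus = "the model predicts the next word and the model learns patterns".split()

# Count which word follows which word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the most frequent follower of `word`, or None if unseen.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "model" — it follows "the" most often here
```

GPT replaces these raw counts with a neural network trained on internet-scale text, but the task is the same: predict the next token.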
01:37 — Machine learning started with prediction and classification. “Most AI applications that benefit businesses are these classification or predictive models,” explains Toni. The Netflix recommender algorithm is one example. It uses data from previous movies and shows you’ve liked in the past to recommend what to watch next.
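A recommender in the spirit of the Netflix example can be sketched as scoring unseen titles by their overlap with what a user has already liked. All titles and genres below are invented for illustration; real recommender systems use far richer signals.

```python
# Hypothetical viewing history and catalog, invented for this sketch.
liked = {"Show A": {"sci-fi", "thriller"},
         "Show B": {"sci-fi", "mystery"}}
catalog = {"Show C": {"sci-fi", "thriller"},
           "Show D": {"drama", "history"},
           "Show E": {"mystery", "crime"}}

def recommend(liked, catalog):
    # Build a taste profile from all genres the user has liked,
    # then score each catalog title by shared genres.
    profile = set().union(*liked.values())
    scores = {title: len(genres & profile) for title, genres in catalog.items()}
    return max(scores, key=scores.get)

print(recommend(liked, catalog))  # "Show C" shares the most genres
```

This is classification/prediction in miniature: the model maps past data (liked genres) to a predicted preference, which is the pattern most business AI applications follow.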
02:12 — GPT-3 is a transformer model. “There’s a pretty big debate going on as to whether these transformer models will reach AGI, or what we call Artificial General Intelligence, which basically matches the level of human intelligence,” Toni says.
02:57 — OpenAI CEO Sam Altman has pointed out the trend toward “base-level models.” The GPT series has already shown that one model can be used to train other models. “Think of it like a tech stack,” says Toni.
Looking for real-world insights on artificial intelligence and hyperautomation? Subscribe to the AI and Hyperautomation channel.

