OpenAI has released its latest AI model, GPT-4o, adding new capabilities to the chatbot and improving existing ones. The update may have surprised viewers of the presentation, but it also made a strong impression on OpenAI CEO Sam Altman, who says it feels like something out of a movie.
Sam Altman expressed surprise at GPT-4o's capabilities, likening it to AI from the movies. “The new audio (and video) mode is the best computing interface I've ever used,” Altman wrote on his blog. “It feels like the AI in the movies, and I'm still a little surprised that it's real. Getting to human-level response times and expressiveness turns out to be a big change.”
He emphasized that while OpenAI's original conception was to build AI and use it directly to create benefits for the world, the company's role has now evolved into providing a platform on which users innovate for collective benefit. “We're a business and will find plenty of things to charge for, and that will help us provide free, outstanding AI service to (hopefully) billions of people,” Altman said.
He also highlighted the potential of the new audio and video modes, which, he said, made interacting with a computer feel truly natural to him for the first time.
GPT-4o
OpenAI's latest model, GPT-4o, integrates audio, video, and text in a way that aims to make interactions with AI more natural and intuitive. The model does more than process text: it understands and responds to audio and visual input, allowing for more human-like interactions. OpenAI has also made strides in reducing response times, with GPT-4o able to answer in as little as a few hundred milliseconds, comparable to the pace of a natural conversation.
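For developers, these multimodal capabilities are also exposed through OpenAI's API. Below is a minimal sketch, using the official Python SDK, of sending text and an image to GPT-4o in a single request; the image URL is a placeholder, and an API key is assumed to be set in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch: a multimodal (text + image) request to GPT-4o.
# Requires the openai Python package (v1+) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                # Placeholder URL -- substitute a real, publicly reachable image.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

Note that API access is billed and rate-limited separately from the ChatGPT tiers described below.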
Availability and impact
The rollout of GPT-4o begins on ChatGPT, where it is available to free users as well as paid subscribers. Paid members, however, get five times the usage limit of free users.
