OpenAI held its highly anticipated Spring Update event on Monday, announcing a new desktop app for ChatGPT, minor user interface changes to ChatGPT's web client, and a new flagship-level artificial intelligence (AI) model called GPT-4o. The event was streamed online on YouTube and held in front of a small live audience. During the event, the AI company also announced that all of GPT-4's features, previously available only to premium users, will now be available to everyone for free.
Updates to OpenAI's ChatGPT desktop app and interface
OpenAI Chief Technology Officer Mira Murati kicked off the event by introducing the new ChatGPT desktop app. The app is equipped with computer vision and can see the user's screen. Users can turn this feature on or off, and the AI analyzes what it sees in order to help. The CTO also revealed that the interface of the web version of ChatGPT has received minor updates. The new UI has a minimal look, and suggestion cards are displayed when a user visits the website. The icons are also smaller, and the entire side panel can be hidden, leaving more of the screen available for the conversation. Notably, ChatGPT can now access a web browser and provide real-time search results.
Features of GPT-4o
The main highlight of the OpenAI event was the company's latest flagship-grade AI model, GPT-4o, where the "o" stands for "omni". Murati highlighted that the new model is two times faster, 50 percent cheaper, and has five times higher rate limits compared to the GPT-4 Turbo model.
GPT-4o also significantly reduces response latency and can generate real-time responses even in voice mode. In a live demo of the AI model, OpenAI showed it conversing with and reacting to users in real time. Powered by GPT-4o, ChatGPT can now also be interrupted mid-response to answer other questions, which was not possible before. But the biggest enhancement in the announced model is the inclusion of emotive voices.
When ChatGPT speaks, its responses now include various voice modulations that make it sound less robotic and more human. The demo showed that the AI can also recognize and react to human emotions in audio. For example, when a user spoke in a panicked voice, the AI responded in a concerned, reassuring tone.
Improvements have also been made to computer vision: based on the live demos, GPT-4o can process and respond to a live video feed from the device's camera. It can watch a user solve an equation and provide step-by-step guidance, correcting mistakes in real time. Similarly, it can now take in large amounts of code, analyze it instantly, and share suggestions for improvement. Users can also open the camera and speak with their faces visible, allowing the AI to detect their emotions.
In another live demo, ChatGPT, powered by the latest AI model, performed live voice translation and was able to speak in multiple languages in quick succession. OpenAI did not mention a subscription price for accessing the GPT-4o model, but emphasized that it will be rolled out and made available as an API in the coming weeks.
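Once API access rolls out, developers would presumably call the model through OpenAI's existing chat completions interface. The sketch below shows how such a request payload might be assembled; the model identifier "gpt-4o" and the example prompt are assumptions based on the announced name, not confirmed API details.

```python
# Minimal sketch of building a chat completions request for the
# newly announced model. The model name "gpt-4o" is an assumption
# based on the announcement; check OpenAI's docs once the API ships.
def build_chat_request(prompt: str, model: str = "gpt-4o") -> dict:
    """Assemble the JSON payload for a chat completions request."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request("Translate 'hello' into French.")
print(payload["model"])  # → gpt-4o
```

With the official Python SDK (`pip install openai`), this payload maps directly onto `client.chat.completions.create(**payload)`, which sends the request and returns the model's reply.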
GPT-4 now available for free
Apart from all the new releases, OpenAI has also made the GPT-4 AI model, along with its features, available for free. Users on the platform's free tier can now access GPTs (mini chatbots designed for specific use cases), the GPT Store, the memory feature that remembers users and information related to them for future conversations, and advanced data analysis, all at no charge.
