Danish Kapoor

OpenAI releases GPT-4o model: free to ChatGPT users

OpenAI announced its new model, GPT-4o, which revolutionizes the field of artificial intelligence. This model is the latest iteration of the GPT-4 series, which powers the company's flagship product, ChatGPT. OpenAI CTO Mira Murati stated that this update is “much faster” and “provides improvements in text, image and audio capabilities.”

GPT-4o's capabilities will be “deployed iteratively,” according to the company's blog post. The rollout begins today: text and image capabilities are available immediately in ChatGPT, while audio capabilities will initially be limited to red-team access. This staged process still lets users experience the model's core new features right away.

OpenAI CEO Sam Altman stated that the model is “inherently multimodal.” This means the model can generate content or understand commands in voice, text or images. Developers will have access to the GPT-4o API, which is half the price and twice the speed of GPT-4-turbo.

The new model accepts any combination of text, audio and image as input and can produce output in all three formats. It can also recognize emotion, lets users interrupt it mid-conversation, and responds almost as quickly as a human during conversations.
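As a rough illustration of what "any combination of text, audio and image" means for developers, the sketch below builds a single request that pairs a text prompt with an image reference. It uses plain Python dictionaries in the shape of OpenAI's chat-completions API; the field layout and the `"gpt-4o"` model name are assumptions for illustration, and no request is actually sent.

```python
# Sketch of a multimodal chat request payload, assuming the general
# shape of OpenAI's chat-completions API. Field names and the model
# name are illustrative assumptions; nothing is sent over the network.

def build_multimodal_request(prompt_text, image_url, model="gpt-4o"):
    """Combine a text prompt and an image reference into one payload."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt_text},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_multimodal_request(
    "What equation is written on this page?",
    "https://example.com/notebook-page.jpg",
)
print(request["model"])                        # gpt-4o
print(len(request["messages"][0]["content"]))  # 2
```

The key idea is that both modalities travel in one message as a list of typed content parts, rather than as separate requests, which is what lets a single network handle mixed input.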

“The thing about GPT-4o is that it provides GPT-4 level intelligence to everyone, including our free users,” OpenAI CTO Mira Murati said during a livestreamed presentation. “For the first time, we are taking a big step forward when it comes to ease of use.”

During the presentation, OpenAI demonstrated GPT-4o translating live between English and Italian, helping a researcher solve a linear equation written on paper in real time, and guiding another OpenAI executive through deep-breathing exercises simply by listening to his breathing.

The “o” in GPT-4o stands for “omni,” a reference to the model's multimodal capabilities. OpenAI said GPT-4o is trained on text, images and audio, which means all input and output are processed by the same neural network. This differs from the company's previous models, GPT-3.5 and GPT-4, which let users ask questions by voice only by first converting the speech to text. That extra step stripped out tone and emotion and slowed down interactions.

Various reports leading up to GPT-4o's launch had predicted that OpenAI would announce an AI search engine to rival Google and Perplexity, a voice assistant built into GPT-4, or an entirely new and improved model, GPT-5. Instead, OpenAI timed this launch to land just before Google I/O, the technology giant's most important conference of the year.

OpenAI's strategic launch timing reveals the intensity of competition in the market and the race for innovation among technology companies. Various AI products are expected to be launched at the Google I/O conference, which made the days before it a window of opportunity for OpenAI.

GPT-4o model will be offered free of charge to all users

OpenAI's GPT-4o model will be available free of charge to all users. However, paid users will have five times the capacity limit compared to free users. This provides a significant advantage, especially for applications requiring intensive use.

This new model from OpenAI represents a new era in the development of artificial intelligence-based applications. The speed and multimodal features offered by GPT-4o will enable users to use artificial intelligence more efficiently and effectively. This means a big win for both developers and end users.

All in all, OpenAI's GPT-4o model can be seen as a big step forward for ChatGPT users. With this model, innovations in artificial intelligence technology will reach a wider user base. This move by OpenAI is considered a positive sign for the future of artificial intelligence.
