
What is OpenAI’s new GPT-4o model? Free users get more features including advanced audio

OpenAI is launching a new flagship generative AI model called GPT-4o, which will be introduced “iteratively” across the company’s developer and consumer products in the coming weeks. There had been speculation that OpenAI would announce a search engine, but CEO Sam Altman denied the rumors.

Mira Murati, CTO of OpenAI, said that GPT-4o provides “GPT-4-level” intelligence, improving on GPT-4’s capabilities in text and vision, and now adding audio.

Murati emphasized the increasing complexity of these models and the goal of making interactions more natural and intuitive, saying, “We want the interaction experience to really be more natural and easier, and not focused on the UI at all for you, but rather just focused on collaboration with [GPTs].”

What features does GPT-4o have?

During a keynote speech at OpenAI’s offices, Murati explained, “GPT-4o spans voice, text, and vision. This is incredibly important as we look to the future of interactions between us and machines.”

The predecessor, GPT-4, was capable of processing both images and text, performing tasks such as extracting text from images or describing their content. GPT-4o extends these capabilities to include speech.

Significantly changing the ChatGPT experience, GPT-4o allows for more interactive, assistant-like conversations. Previously, ChatGPT included a voice mode that converted text to speech. Now, GPT-4o extends this feature, allowing users to interrupt ChatGPT mid-response and giving the model a “real-time” feel. It can also detect emotional cues in the user’s voice and respond in different emotional tones.

GPT-4o also enhances the vision capabilities of ChatGPT. Whether analyzing a photo or a computer screen, ChatGPT can now rapidly answer questions ranging from software code analysis to identifying clothing brands. The company is also releasing a desktop version of ChatGPT and introducing a new user interface.

Starting today, the new model is available in ChatGPT’s free tier and to ChatGPT Plus subscribers, who get a “5 times higher” message limit. OpenAI plans to introduce the new voice feature powered by GPT-4o to Plus users in alpha within the next month.

According to OpenAI, the model also has improved multilingual capabilities, with better performance across 50 different languages. In OpenAI’s API, GPT-4o runs twice as fast as its predecessor, GPT-4 Turbo, costs half as much, and offers higher rate limits.
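For developers, the speed and cost improvements apply through the same chat completions interface as before; switching to the new model is mostly a matter of naming it in the request. Below is a minimal sketch using the official openai Python SDK (v1+), assuming the model is exposed under the identifier gpt-4o and that an API key is available in the OPENAI_API_KEY environment variable:

```python
# Minimal sketch: calling GPT-4o through OpenAI's chat completions API.
# Assumes the openai Python SDK (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # model identifier as announced; check current docs to confirm
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GPT-4o's new capabilities in two sentences."},
    ],
)

print(response.choices[0].message.content)
```

The same request shape works with earlier models such as GPT-4 Turbo, which is why the pricing and latency differences show up without code changes beyond the model name.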

What new features are available to free ChatGPT users?

With the rollout of GPT-4o, ChatGPT Free users will get a suite of new features, including GPT-4-level intelligence. Users will be able to get answers directly from the model, as well as responses that draw on information from the web.

GPT-4o will also be capable of data analysis and visualization, such as creating charts. Free users will be able to chat about photos they upload and ask questions about their contents, and to hand the model more complex tasks such as summarizing documents, writing content, or analyzing an uploaded file in detail.

Finally, there is now a Memory feature, which remembers past interactions and context to deliver a more cohesive and personalized experience.

Featured Image: Canva
