OpenAI, the Microsoft-backed AI trailblazer, has unveiled a new, economical AI model: GPT-4o Mini. Announced on Thursday, this smaller sibling of its flagship models is designed to be both affordable and less energy-intensive, targeting a wider array of customers.
At a time when competition in the AI sector is intensifying, with major players like Meta and Google striving to capture a larger market share, OpenAI’s GPT-4o Mini comes as a strategic move to facilitate quicker and more cost-effective AI application development for programmers.
Priced competitively at 15 cents per million input tokens and 60 cents per million output tokens, GPT-4o Mini comes in more than 60% cheaper than GPT-3.5 Turbo, the model it replaces. This pricing not only makes the technology more accessible but also reflects OpenAI's stated aim of democratizing AI.
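For a sense of what those rates mean in practice, here is a rough, back-of-the-envelope cost calculation; the token counts in the example are illustrative only, not figures from OpenAI.

```python
# Rough cost estimate at GPT-4o Mini's published rates:
# $0.15 per million input tokens, $0.60 per million output tokens.
def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in US dollars for one request."""
    return input_tokens / 1_000_000 * 0.15 + output_tokens / 1_000_000 * 0.60

# Example: a request with 10,000 input tokens and 2,000 output tokens
# costs roughly $0.0015 + $0.0012 = $0.0027.
print(f"${estimate_cost(10_000, 2_000):.4f}")
```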
Moreover, GPT-4o Mini performs strongly in chat applications and scored 82% on the Massive Multitask Language Understanding (MMLU) benchmark, putting it ahead of comparable small models such as Google's Gemini Flash and Anthropic's Claude Haiku in language understanding across a wide range of subjects.
The reduced computational power requirement of smaller models like the GPT-4o Mini makes them ideal for companies with limited resources, enabling them to integrate advanced generative AI into their operations without significant investment.
GPT-4o Mini's API currently supports text and vision, and OpenAI plans to expand its capabilities to text, image, video, and audio inputs and outputs in the future.
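As a minimal sketch of how developers might use those text and vision capabilities through OpenAI's Python SDK, the snippet below sends a text prompt alongside an image; the image URL is a placeholder, and actual usage requires an API key set in the environment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send a single user message combining text and an image input.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```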
From Thursday, ChatGPT’s Free, Plus, and Team users will gain access to the GPT-4o Mini, replacing the GPT-3.5 Turbo, while enterprise users will follow suit next week.