New GPT-4o mini Is More Powerful

OpenAI reveals GPT-4o mini, a powerful new AI model

On Thursday, OpenAI unveiled its latest model, GPT-4o mini, a compact AI model the company describes as both cheaper and faster than its most advanced existing models. It is rolling out to developers, and to consumers through the ChatGPT web and mobile apps, starting today, with access for business customers following next week.

The company says GPT-4o mini outperforms leading small AI models on reasoning tasks that involve both text and images. As small AI models grow more capable, developers increasingly favor them over larger counterparts such as GPT-4 Omni or Claude 3.5 Sonnet because they are faster and cheaper, which makes them a practical fit for the high-volume, repetitive tasks AI models handle most often.

Advantages

GPT-4o mini offers a broad range of capabilities at low cost and with minimal latency. It is well suited to tasks such as chaining or parallelizing multiple model calls (e.g., invoking several APIs), passing the model large amounts of context (e.g., a full code base or conversation history), and engaging customers with fast, real-time text responses (e.g., customer support chatbots).
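As a rough illustration of the parallel-calls pattern, the sketch below fires a few GPT-4o mini requests concurrently using the official openai Python SDK (v1.x); the prompts and the asyncio setup are illustrative assumptions, not anything prescribed by OpenAI.

```python
# A minimal sketch of parallelizing several GPT-4o mini calls with the
# official openai Python SDK (>= 1.0); the prompts below are made-up examples.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

async def ask(prompt: str) -> str:
    resp = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

async def main() -> None:
    prompts = [
        "Summarize this ticket: customer cannot reset their password.",
        "Classify the sentiment of: 'The update broke my workflow.'",
        "Draft a one-line reply thanking a customer for feedback.",
    ]
    # Fire the requests concurrently instead of one after another.
    answers = await asyncio.gather(*(ask(p) for p in prompts))
    for prompt, answer in zip(prompts, answers):
        print(prompt, "->", answer)

asyncio.run(main())
```

Because the requests run concurrently, the slowest call, rather than the sum of all calls, determines the overall latency.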

GPT-4o mini currently supports text and image (vision) inputs through the API, with support for video and audio planned for the future. It has a context window of 128K tokens, can generate up to 16K output tokens per request, and has a knowledge cutoff of October 2023. Additionally, the improved tokenizer it shares with GPT-4o makes handling non-English text more cost-effective.
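Because the model accepts text and images in a single request, a minimal vision call might look like the sketch below; the image URL and the max_tokens value are placeholder assumptions.

```python
# A minimal sketch of a text-plus-image request to GPT-4o mini with the
# official openai Python SDK; the image URL and max_tokens are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    max_tokens=1000,  # output is capped at 16K tokens per request
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this chart."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/chart.png"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```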

Superior Textual Intelligence and Multimodal Reasoning

GPT-4o mini surpasses GPT-3.5 Turbo and other small models on academic benchmarks for both textual intelligence and multimodal reasoning, and it supports the same range of languages as GPT-4o. It also performs strongly at function calling, which lets developers build applications that retrieve information from or take actions in external systems, and it outperforms GPT-3.5 Turbo on tasks that require understanding long context.
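To make the function-calling point concrete, here is a minimal sketch using the Chat Completions tools parameter from the openai Python SDK; the get_weather tool and its schema are hypothetical examples, not something described in the article.

```python
# A minimal function-calling sketch with GPT-4o mini; the get_weather tool
# and its parameter schema are hypothetical, for illustration only.
import json
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Kolkata?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model chose to call the hypothetical tool
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
```

In a real application, the program would execute the requested function and pass its result back to the model in a follow-up message so the model can produce a final answer.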

Benchmark Comparison

GPT-4o mini is set to replace GPT-3.5 Turbo as the smallest AI model OpenAI offers. The company asserts that the new model scores 82% on MMLU, a benchmark for evaluating reasoning skills, surpassing Gemini 1.5 Flash at 79% and Claude 3 Haiku at 75%, according to data from Artificial Analysis. On MGSM, which measures mathematical reasoning, GPT-4o mini scores 87%, beating Flash at 78% and Haiku at 72%.

GPT-4o mini benchmark comparison

For developers using OpenAI's API, GPT-4o mini is priced at 15 cents per million input tokens and 60 cents per million output tokens. It offers a context window of 128,000 tokens, roughly the length of a book, and a knowledge cutoff of October 2023.
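At those rates, per-request costs are easy to estimate; the short sketch below plugs in made-up token counts purely for illustration.

```python
# Back-of-the-envelope cost estimate at the quoted GPT-4o mini rates
# ($0.15 per 1M input tokens, $0.60 per 1M output tokens); the token
# counts below are made-up example values.
INPUT_PRICE_PER_M = 0.15
OUTPUT_PRICE_PER_M = 0.60

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a 50,000-token prompt with a 2,000-token reply costs well under a cent:
print(f"${request_cost(50_000, 2_000):.5f}")  # $0.00870
```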

OpenAI has not disclosed an exact size for GPT-4o mini but says it falls into the same tier as other small AI models such as Llama 3 8B, Claude 3 Haiku, and Gemini 1.5 Flash. However, the company asserts that GPT-4o mini is not only faster and cheaper but also smarter than the leading small models, citing pre-launch testing in the LMSYS.org Chatbot Arena. Early independent evaluations appear to support this claim.

If you have any questions, please comment below. Thanks for reading.

About the author

Biplab Bhattacharya

Hi, I am Biplab, an aspiring blogger with an obsession for all things tech. This blog is dedicated to helping people learn about technology.
