OpenAI Introduces Flex Processing for More Affordable, Slower AI Tasks

To sharpen its competitive edge against rival AI companies like Google, OpenAI has launched Flex processing, an API option that lowers the cost of AI model usage in exchange for slower response times and occasional “resource unavailability.”

Flex processing, currently in beta for OpenAI’s recently launched o3 and o4-mini reasoning models, is designed for lower-priority tasks such as model evaluations, data enrichment, and asynchronous workloads, according to the company.
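For developers, opting in is reported to be a matter of requesting a lower-priority service tier on an individual API call. The sketch below assumes the OpenAI Python SDK and a `service_tier="flex"` request parameter, and the model name and prompt are purely illustrative; check OpenAI's current API reference before relying on either.

```python
# Minimal sketch: requesting Flex processing via the OpenAI Python SDK.
# The `service_tier="flex"` parameter and the "o3" model string are assumptions
# based on OpenAI's published docs; verify both against the current API reference.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3",
    service_tier="flex",  # lower-priority, cheaper processing; responses may be slower
    messages=[
        {"role": "user", "content": "Summarize this batch of product reviews ..."}
    ],
    timeout=900,  # Flex requests can take longer, so a generous per-request timeout helps
)

print(response.choices[0].message.content)
```

Because Flex requests can also return “resource unavailable” errors at peak times, retrying failed calls or routing them back to the standard tier is a sensible fallback for anything time-sensitive.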

Significant Cost Reduction for API Usage

This option reduces API costs by 50%. For o3, Flex processing charges $5 per million input tokens (~750,000 words) and $20 per million output tokens, compared to the standard price of $10 per million input tokens and $40 per million output tokens. For o4-mini, the cost drops to $0.55 per million input tokens and $2.20 per million output tokens, from $1.10 per million input tokens and $4.40 per million output tokens.
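To make the saving concrete, here is a rough back-of-the-envelope comparison for a hypothetical o3 workload; the token counts are illustrative, not figures from OpenAI.

```python
# Illustrative cost comparison for a hypothetical o3 workload of
# 10 million input tokens and 2 million output tokens (made-up figures).
INPUT_TOKENS = 10_000_000
OUTPUT_TOKENS = 2_000_000

def cost(input_per_m: float, output_per_m: float) -> float:
    """Total cost in dollars, given per-million-token prices."""
    return (INPUT_TOKENS / 1e6) * input_per_m + (OUTPUT_TOKENS / 1e6) * output_per_m

standard = cost(10.00, 40.00)  # standard o3 pricing
flex = cost(5.00, 20.00)       # Flex o3 pricing

print(f"Standard: ${standard:,.2f}")  # $180.00
print(f"Flex:     ${flex:,.2f}")      # $90.00 -- exactly half
```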

The launch of Flex processing comes at a time when the prices for cutting-edge AI models continue to rise, while competitors release more affordable, efficient models aimed at budget-conscious users. Recently, Google introduced Gemini 2.5 Flash, a reasoning model that matches or surpasses the performance of DeepSeek’s R1 at a lower cost per input token.

In a recent email to customers, OpenAI also said that developers in the first three tiers of its usage system must now complete an ID verification process to access the o3 model. The company assigns these tiers based on how much a customer spends on OpenAI services. Access to o3’s reasoning summaries and the streaming API also requires this verification.

OpenAI previously stated that it introduced the ID verification process to prevent misuse of its services and to ensure users comply with its usage policies.


Read the original article on: TechCrunch

Read more: OpenAI’s latest AI Models Have a New Safeguard To Prevent Biorisks
