OpenAI has launched a new Batch API designed to help developers run large asynchronous AI workloads more efficiently and cost-effectively. The API offers a 50% discount on all tokens, significantly higher rate limits (up to 250 million enqueued tokens for GPT-4 Turbo), and the ability to upload bulk request files with results returned within 24 hours. The launch aims both to reduce costs and to streamline asynchronous tasks such as summarization, translation, and image classification. OpenAI also plans to release the API on Azure soon.
Announcing the Batch API - receive results for your requests within 24 hours at 50% of the cost and higher rate limits https://t.co/Bf52xjkbGh
Batch API — 50% off for async requests to our models: https://t.co/TYp0rJ6jAy
Developers managing large async AI tasks, this is for you! Our new Batch API is here to reduce costs and increase your rate limits: https://t.co/xOCpAXhsX4
We just launched our Batch API:
- 50% discount on all tokens
- Huge rate limits (up to 250M enqueued tokens for GPT-4T)
- Upload bulk files and get the completions within 24hrs
https://t.co/Y209TMUTap 🧵 on how to get started https://t.co/a5n1eyQFoM
Batch API for OpenAI models - we will release it on Azure as well soon! https://t.co/0OxMtQhmTN
Introducing the Batch API: save costs and get higher rate limits on async tasks (such as summarization, translation, and image classification). Just upload a file of bulk requests, receive results within 24 hours, and get 50% off API prices: https://t.co/ls8DjR6qA9 https://t.co/3W1GHijV3S
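The workflow described in these announcements — build a file of bulk requests, upload it, and collect completions within 24 hours — can be sketched roughly as follows. The JSONL request shape (`custom_id`, `method`, `url`, `body`) and the upload/enqueue calls follow OpenAI's Batch API; the model name, file path, and prompts here are illustrative assumptions, not part of the announcement.

```python
import json

# Each line of the batch input file is one request. "custom_id" lets you
# match results back to requests; "url" names the target endpoint.
prompts = {
    "task-1": "Summarize: The quick brown fox jumps over the lazy dog.",
    "task-2": "Translate to French: Good morning.",
}

with open("batch_input.jsonl", "w") as f:
    for custom_id, prompt in prompts.items():
        request = {
            "custom_id": custom_id,
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": "gpt-4-turbo",  # illustrative model name
                "messages": [{"role": "user", "content": prompt}],
            },
        }
        f.write(json.dumps(request) + "\n")

# Uploading and enqueueing the batch requires an API key and network access,
# so it is shown here as commented context rather than executed:
# client = openai.OpenAI()
# batch_file = client.files.create(
#     file=open("batch_input.jsonl", "rb"), purpose="batch"
# )
# batch = client.batches.create(
#     input_file_id=batch_file.id,
#     endpoint="/v1/chat/completions",
#     completion_window="24h",
# )
```

Once the batch completes, results arrive as another JSONL file keyed by the same `custom_id` values, which is what makes the asynchronous round trip practical at the discounted rate.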