OpenAI has finally delivered a dev digest.

If you missed it, the Responses API is their replacement for the “standard” Chat Completions API, which barely gets updated anymore. https://openai.com/index/new-tools-and-features-in-the-responses-api/

  • [Responses API] Added support for remote MCP servers as a built-in tool. Wiring this up myself wasn’t hard either, and I might keep doing it that way to stay in control and be able to influence decisions. But for very simple projects it’s an interesting option (sketch after the list).
  • [Responses API] Added gpt-image-1 as a tool, with streaming and image editing.
  • [Responses API] Code Interpreter tool: the model writes and runs Python code. Sounds cool, and in some cases it improves answer quality, but 3 cents per container creation isn’t worth it for my personal tasks. For some kind of data-satanist assistant, though, it might be totally justified, especially paired with File Search, which lets you upload files and have the model search through them (sketch after the list).
  • [Responses API] Background mode: instead of waiting on the request synchronously, you submit it and poll its status (sketch after the list).
  • [Responses API] Reasoning summaries: you still can’t see the chain-of-thought of reasoning models (I still don’t get why), but now you can at least get a summary of it. Curious whether it differs much from the thoughts the model writes directly into the answer (or tool call) when you ask it to. In theory, useful for “debugging” prompts (sketch after the list).
  • Added a “flex” mode to all APIs: models respond more slowly, but you get Batch API-level prices plus the cache discount on input tokens. For now it’s in beta and only for o3 and o4-mini (sketch after the list). https://platform.openai.com/docs/guides/flex-processing
  • Enabled fine-tuning for o4-mini.
  • In Codex CLI you can now log in via ChatGPT. ChatGPT Plus and Pro users can redeem $5 and $50 in free API credits, respectively, for the next 2 weeks.
  • Added a new model to Codex CLI, codex-mini, for which you can get 10M free tokens if you sell your soul (read: give access to your data).
  • Codex is now available in ChatGPT Pro and Team.
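
A minimal sketch of the remote MCP tool, roughly as the docs show it; the model name and the DeepWiki server URL are just examples, swap in your own:

```python
from openai import OpenAI

client = OpenAI()

# Hand the model a remote MCP server as a tool; it discovers and calls the
# server's tools on its own.
response = client.responses.create(
    model="gpt-4.1",  # example model
    tools=[
        {
            "type": "mcp",
            "server_label": "deepwiki",                    # any label you like
            "server_url": "https://mcp.deepwiki.com/mcp",  # example public MCP server
            "require_approval": "never",                   # or keep per-call approvals on
        }
    ],
    input="What transport protocols does the MCP spec support?",
)

print(response.output_text)
```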
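
Code Interpreter is just another entry in the tools list; a sketch assuming the container "auto" form from the announcement (that auto-created container is where the 3 cents go):

```python
from openai import OpenAI

client = OpenAI()

# Let the model write and execute Python in a sandboxed container.
response = client.responses.create(
    model="o4-mini",  # example model
    tools=[
        # "auto" asks the API to spin up a container for this request.
        {"type": "code_interpreter", "container": {"type": "auto"}},
        # File Search would be a separate {"type": "file_search", ...} tool
        # pointing at a vector store with your uploaded files.
    ],
    input="Given the samples 3, 1, 4, 1, 5, 9, compute the mean and standard deviation.",
)

print(response.output_text)
```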
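
Background mode is a flag plus polling; a sketch, with an arbitrary model, prompt, and 2-second poll interval:

```python
import time

from openai import OpenAI

client = OpenAI()

# Submit the request without blocking on it.
job = client.responses.create(
    model="o3",  # example model
    input="Write a detailed comparison of Postgres and ClickHouse for analytics.",
    background=True,
)

# Poll until it leaves the queued/in_progress states.
while job.status in ("queued", "in_progress"):
    time.sleep(2)
    job = client.responses.retrieve(job.id)

print(job.status)
print(job.output_text)
```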
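
Reasoning summaries are requested through the reasoning parameter; a sketch assuming the summary comes back as separate reasoning items in the output list (the exact shape of those items may differ):

```python
from openai import OpenAI

client = OpenAI()

# Ask for a summary of the hidden chain-of-thought alongside the answer.
response = client.responses.create(
    model="o4-mini",  # example model
    reasoning={"effort": "medium", "summary": "auto"},  # "auto" = whatever summary type the model supports
    input="A bat and a ball cost $1.10; the bat costs $1.00 more than the ball. How much is the ball?",
)

# Summaries arrive as reasoning items next to the regular message output.
for item in response.output:
    if item.type == "reasoning":
        for part in item.summary:
            print("summary:", part.text)

print("answer:", response.output_text)
```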
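
Flex is just a service tier on the same call, per the guide linked above; a sketch, and the generous client timeout is my own guess, since flex requests can sit in a queue:

```python
from openai import OpenAI

# Flex requests can take noticeably longer, so give the client more room.
client = OpenAI(timeout=900.0)

response = client.responses.create(
    model="o3",  # flex is currently limited to o3 and o4-mini
    input="Summarize the tradeoffs between row-oriented and column-oriented storage.",
    service_tier="flex",  # Batch-level pricing, slower responses
)

print(response.output_text)
```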