This bit feels like we are being pushed away from the existing API for non-technical reasons?
> When using Chat Completions, the model always retrieves information from the web before responding to your query. To use web_search_preview as a tool that models like gpt-4o and gpt-4o-mini invoke only when necessary, switch to using the Responses API.
Porting over to the new Responses API is non-trivial, and our assistant already has history, RAG, and the other things it needs.
I can’t find that text in the announcement. In fact it sounds like you have to use a specific model with the chat completions endpoint to get web searches.
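For concreteness, here's a rough sketch of how the two request shapes differ, going by the quoted docs. The model names (`gpt-4o-search-preview`) and parameter names (`web_search_options`, `web_search_preview`) are my reading of the announcement, not verified against the current API reference, so double-check before relying on them:

```python
# Sketch of the two request payloads as plain dicts (no SDK call made here).
# Names below are assumptions based on the announcement, not verified.

# Chat Completions: web search is baked into a dedicated search model,
# so it runs on every request.
chat_request = {
    "model": "gpt-4o-search-preview",   # assumed search-enabled model name
    "web_search_options": {},           # assumed opt-in parameter
    "messages": [{"role": "user", "content": "What happened in tech today?"}],
}

# Responses API: web search is a tool that a regular model (e.g. gpt-4o)
# invokes only when it decides it needs to.
responses_request = {
    "model": "gpt-4o",
    "tools": [{"type": "web_search_preview"}],  # assumed tool type name
    "input": "What happened in tech today?",
}
```

That tool-vs-model distinction seems to be the whole difference the quoted text is pointing at, which is why the selective behaviour is gated behind the Responses API.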