Switch to OpenAI Responses API (#1981)

## Summary

#1960 added support for OpenAI's Chat Completions API.

This change switches to using [OpenAI's new Responses API](https://developers.openai.com/api/docs/guides/migrate-to-responses) instead.
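In `@ai-sdk/openai` terms, the switch is which provider factory builds the model: `openai.chat(...)` targets `/v1/chat/completions`, while `openai.responses(...)` targets `/v1/responses`. A self-contained sketch of the provider shape (assumed minimal types for illustration, not the real SDK's):

```typescript
// Minimal sketch of the provider shape assumed by this PR; the real
// @ai-sdk/openai types are richer, but the factory split is the same.
type LanguageModel = { endpoint: string; model: string };

interface OpenAIProvider {
  chat: (model: string) => LanguageModel;      // Chat Completions endpoint
  responses: (model: string) => LanguageModel; // Responses endpoint
}

function createOpenAI(opts: { apiKey: string; baseURL?: string }): OpenAIProvider {
  return {
    chat: model => ({ endpoint: '/v1/chat/completions', model }),
    responses: model => ({ endpoint: '/v1/responses', model }),
  };
}

// Before this PR: openai.chat(config.AI_MODEL_NAME)
// After this PR:  openai.responses(config.AI_MODEL_NAME)
const openai = createOpenAI({ apiKey: 'sk-test' });
const model = openai.responses('gpt-4o');
console.log(model.endpoint); // "/v1/responses"
```

The provider object exposes both factories, so the rest of the call sites (which only see a `LanguageModel`) are unchanged.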

### How to test locally
1. Set env vars:
   `AI_PROVIDER=openai AI_API_KEY= AI_BASE_URL=<> AI_MODEL_NAME=<> AI_REQUEST_HEADERS={"X-Client-Id":"","X-Username":""} AI_ADDITIONAL_OPTIONS={"API_TYPE":"responses"}`
2. Open HyperDX's chart explorer and use the AI assistant chart builder
   - e.g. "show me error count by service in the last hour"
3. Confirm the assistant returns a valid chart config.
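The env vars above can be set one per line; a sketch with placeholder values (the key, base URL, and model name are yours to fill in, and the start command depends on how you normally run the app):

```shell
# Example values only; JSON-valued vars must be single-quoted so the
# shell passes them through verbatim for the config loader to parse.
export AI_PROVIDER=openai
export AI_API_KEY='<your key>'
export AI_BASE_URL='<your base url>'
export AI_MODEL_NAME='<your model>'
export AI_REQUEST_HEADERS='{"X-Client-Id":"","X-Username":""}'
export AI_ADDITIONAL_OPTIONS='{"API_TYPE":"responses"}'
# ...then start the app as you normally would.
```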

### References

- Linear Issue:
- Related PRs:
Vineet Ahirkar 2026-03-24 18:46:43 -07:00 committed by GitHub
parent 275dc94161
commit 629009da9e
3 changed files with 12 additions and 7 deletions


```diff
@@ -0,0 +1,5 @@
+---
+'@hyperdx/api': patch
+---
+
+Update OpenAI model configuration to use the new Responses API
```


```diff
@@ -13,9 +13,11 @@ const mockCreateAnthropic = jest.fn(
   (_opts?: Record<string, unknown>) => mockAnthropicFactory,
 );
 const mockOpenAIChatFactory = jest.fn((_model?: string) => mockOpenAIModel);
+const mockOpenAIResponsesFactory = jest.fn(
+  (_model?: string) => mockOpenAIModel,
+);
 const mockCreateOpenAI = jest.fn((_opts?: Record<string, unknown>) => ({
   chat: mockOpenAIChatFactory,
+  responses: mockOpenAIResponsesFactory,
 }));
 jest.mock('@ai-sdk/anthropic', () => ({
@@ -191,7 +193,7 @@ describe('openai provider', () => {
     expect(mockCreateOpenAI).toHaveBeenCalledWith(
       expect.objectContaining({ apiKey: 'sk-test' }),
     );
-    expect(mockOpenAIChatFactory).toHaveBeenCalledWith('gpt-4o');
+    expect(mockOpenAIResponsesFactory).toHaveBeenCalledWith('gpt-4o');
   });
   it('passes baseURL when AI_BASE_URL is set', () => {
```


```diff
@@ -368,9 +368,7 @@ function getAnthropicModel(): LanguageModel {
 }
 
 /**
- * Configure OpenAI-compatible model.
- * Works with any OpenAI Chat Completions-compatible endpoint
- * (e.g. Azure OpenAI, OpenRouter, LiteLLM proxies).
+ * Configure OpenAI-compatible model using the Responses API (/v1/responses).
  */
 function getOpenAIModel(): LanguageModel {
   const apiKey = config.AI_API_KEY;
@@ -399,5 +397,5 @@ function getOpenAIModel(): LanguageModel {
     ...(Object.keys(headers).length > 0 && { headers }),
   });
-  return openai.chat(config.AI_MODEL_NAME);
+  return openai.responses(config.AI_MODEL_NAME);
 }
```
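Unrelated to the endpoint switch, but worth noting in the surrounding code: the `...(Object.keys(headers).length > 0 && { headers })` spread includes the `headers` key only when at least one header is configured. A minimal standalone sketch of the pattern (the `buildOptions` helper is hypothetical, not code from this PR):

```typescript
// Conditional object spread: when the condition is false, the
// expression evaluates to `false`, and spreading `false` into an
// object literal contributes no keys at all.
function buildOptions(apiKey: string, headers: Record<string, string>) {
  return {
    apiKey,
    ...(Object.keys(headers).length > 0 && { headers }),
  };
}

console.log('headers' in buildOptions('sk-test', {}));                     // false
console.log('headers' in buildOptions('sk-test', { 'X-Client-Id': 'a' })); // true
```

This keeps the key absent, rather than present with an empty object, which matters for SDKs that distinguish "not provided" from "provided but empty".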