LobeChat's large-model AI functionality is built primarily on OpenAI's API, spanning the core conversation route on the backend and the integration service on the frontend. Below, we walk through the implementation approach and code for the backend and frontend in turn.
In the file `src/app/api/openai/chat/handler.ts`, we define a `POST` method that first parses the conversation payload from the request (i.e., the messages sent by the client) and then reads the authorization information from the request. With those in hand, it creates an `openai` instance and calls `createChatCompletion`, which sends the conversation request to OpenAI and returns the result.
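A minimal sketch of what such a handler could look like, assuming a standard Next.js route handler and the official `openai` SDK; the `Authorization` header format and the fallback to an environment variable are illustrative assumptions, not the project's exact logic:

```ts
import OpenAI from 'openai';

import { createChatCompletion } from './createChatCompletion';

export const POST = async (req: Request) => {
  // Parse the conversation payload sent by the client
  const payload = await req.json();

  // Read the authorization info from the request; falling back to a
  // server-side environment variable is an assumption for this sketch
  const apiKey =
    req.headers.get('Authorization')?.replace('Bearer ', '') || process.env.OPENAI_API_KEY;

  // Create the OpenAI client and delegate to createChatCompletion,
  // which returns a streaming Response
  const openai = new OpenAI({ apiKey });
  return createChatCompletion({ openai, payload });
};
```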
In the file `src/app/api/openai/chat/createChatCompletion.ts`, we define the `createChatCompletion` method, which first preprocesses the payload data, then calls OpenAI's `chat.completions.create` method to send the request, and uses `OpenAIStream` from the [Vercel AI SDK](https://sdk.vercel.ai/docs) to convert the returned result into a streaming response.
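Under those assumptions, `createChatCompletion` might look roughly like the following. The filtering step stands in for whatever preprocessing the real code does, and `StreamingTextResponse` is the companion helper the Vercel AI SDK pairs with `OpenAIStream` for this pattern:

```ts
import { OpenAIStream, StreamingTextResponse } from 'ai';
import OpenAI from 'openai';

interface CreateChatCompletionOptions {
  openai: OpenAI;
  payload: {
    messages: OpenAI.Chat.ChatCompletionMessageParam[];
    model: string;
  };
}

export const createChatCompletion = async ({ openai, payload }: CreateChatCompletionOptions) => {
  // Preprocess the payload, e.g. drop messages with empty content (illustrative)
  const messages = payload.messages.filter((m) => !!m.content);

  // Ask OpenAI for a streaming chat completion
  const response = await openai.chat.completions.create({
    messages,
    model: payload.model,
    stream: true,
  });

  // Convert the OpenAI stream into a web ReadableStream and wrap it
  // in a streaming HTTP Response for the client
  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
};
```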
In the `src/services/chatModel.ts` file, we define the `fetchChatModel` method, which first preprocesses the payload data, then sends a POST request to the backend's `/chat` endpoint and returns the result.
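A sketch of that service method, assuming the route is mounted at `/api/openai/chat` (inferred from the handler's file path) and that preprocessing simply fills in defaults; the default model is an illustrative assumption:

```ts
interface ChatModelPayload {
  messages: { content: string; role: string }[];
  model?: string;
}

export const fetchChatModel = (payload: ChatModelPayload, signal?: AbortSignal) => {
  // Preprocess the payload, e.g. apply a default model (assumption for this sketch)
  const body = JSON.stringify({ model: 'gpt-3.5-turbo', ...payload });

  // POST the conversation to the backend chat endpoint and return the Response,
  // whose body will be the stream produced by createChatCompletion
  return fetch('/api/openai/chat', {
    body,
    headers: { 'Content-Type': 'application/json' },
    method: 'POST',
    signal,
  });
};
```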
In the `src/utils/fetch.ts` file, we define the `fetchSSE` method, which reads the response body as a stream. Each time a new chunk is read, it invokes the `onMessageHandle` callback to process that chunk, producing a typewriter-like output effect.
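A minimal version of `fetchSSE` could read the body with a `ReadableStream` reader and decode each chunk as it arrives; the options shape below is an assumption for illustration:

```ts
interface FetchSSEOptions {
  onMessageHandle?: (text: string) => void;
}

export const fetchSSE = async (
  fetcher: () => Promise<Response>,
  options: FetchSSEOptions = {},
) => {
  const response = await fetcher();
  if (!response.ok || !response.body) return response;

  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  let done = false;
  while (!done) {
    const { value, done: doneReading } = await reader.read();
    done = doneReading;
    if (value) {
      // Hand each decoded chunk to the callback as soon as it arrives;
      // appending these chunks to the UI produces the typewriter effect
      options.onMessageHandle?.(decoder.decode(value, { stream: true }));
    }
  }

  return response;
};
```

On the calling side, the chat UI can pass `fetchChatModel` as the fetcher and append each chunk delivered via `onMessageHandle` to the message being rendered.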
The above is the core implementation of LobeChat's conversation API. With an understanding of this code, you can further extend and optimize LobeChat's AI functionality.