The following snippet uses the streaming interface for chat completions to create a completion that can call tools:
const completion = await client.chat.completions.create({
  messages,
  model: '...',
  stream: true,
  stream_options: {
    include_usage: true,
  },
  tool_choice: {
    type: 'function',
    function: {
      name: 'searchWeb',
    },
  },
  tools: [searchWeb],
  user: chatSession.userAccount.uid,
});
If you want to force the model to use the searchWeb tool, this extra step is required: setting tool_choice to that specific function, as shown above, makes the model call it rather than deciding on its own.
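The searchWeb value passed in tools must itself be a function-tool definition with a JSON Schema for its parameters. A minimal sketch, assuming the tool takes a single query string (the parameter shape and description here are illustrative, not part of the original):

```typescript
// Hypothetical definition of the searchWeb tool. The parameter shape
// (a single required "query" string) is an assumption for illustration.
const searchWeb = {
  type: 'function' as const,
  function: {
    name: 'searchWeb',
    description: 'Search the web and return relevant results.',
    parameters: {
      type: 'object',
      properties: {
        query: { type: 'string', description: 'The search query.' },
      },
      required: ['query'],
    },
  },
};
```

The name here must match the one referenced in tool_choice, or the forced call will fail.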
The response-handling logic iterates over the chunks received from the completion stream; each chunk carries the model's choices along with any associated tool-call deltas:
type ToolCall = {
  function?: {
    arguments?: string;
    name?: string;
  };
  id?: string;
  index: number;
  type?: 'function';
};
const toolCalls: Record<string, ToolCall> = {};

for await (const chunk of completion) {
  // Handling of choice details and tool calls goes here...
}
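Because the streaming API splits each tool call's arguments across many chunks, the loop body has to stitch the deltas back together by index. A sketch of that accumulation, using the ToolCall type above (the helper name and chunk handling are an assumption about how the elided loop body works, not the author's exact code):

```typescript
type ToolCall = {
  function?: {
    arguments?: string;
    name?: string;
  };
  id?: string;
  index: number;
  type?: 'function';
};

// Hypothetical helper: merge streamed tool-call deltas into complete
// calls, keyed by each delta's index.
function accumulateToolCalls(
  toolCalls: Record<string, ToolCall>,
  deltas: ToolCall[],
): void {
  for (const delta of deltas) {
    const key = String(delta.index);
    const existing = toolCalls[key];
    if (!existing) {
      // First fragment for this call carries id, name, and type.
      toolCalls[key] = { ...delta, function: { ...delta.function } };
    } else {
      // Later fragments only append argument text.
      existing.function = existing.function ?? {};
      existing.function.arguments =
        (existing.function.arguments ?? '') +
        (delta.function?.arguments ?? '');
    }
  }
}
```

Inside the for-await loop this would be called as accumulateToolCalls(toolCalls, chunk.choices[0]?.delta?.tool_calls ?? []), leaving toolCalls fully populated once the stream ends.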
To tell the model which function was invoked and what it returned, subsequent requests must include structured messages carrying the tool-call information:
const messages = [
  {
    content: 'What is the best framework for testing in Node.js?',
    role: 'user',
  },
  {
    content: null,
    role: 'assistant',
    tool_calls: [{...}], // Sample tool call structure
  },
  {
    content: '{"answer":"Recommended answer."}',
    role: 'tool',
    tool_call_id: 'call_id_here',
  },
];
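The tool message in that array is built from a completed tool call: parse its arguments, run the tool, and wrap the result as JSON with the matching tool_call_id. A sketch of that step, where toolMessageFor and the injected runner are hypothetical names, not from the original:

```typescript
type ToolRunner = (query: string) => Promise<string>;

// Hypothetical helper: turn a completed tool call into the "tool" role
// message expected on the next turn. The runner stands in for an actual
// searchWeb implementation.
async function toolMessageFor(
  call: { id: string; function: { name: string; arguments: string } },
  run: ToolRunner,
) {
  const { query } = JSON.parse(call.function.arguments) as { query: string };
  return {
    content: JSON.stringify({ answer: await run(query) }),
    role: 'tool' as const,
    tool_call_id: call.id,
  };
}
```

Each tool message's tool_call_id must match an id in the preceding assistant message's tool_calls, or the API rejects the request.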
To control the formatting of the final response in detail, consider embedding directives within the tool_calls section of the assistant messages.