Create Chat Completion
POST /chat/completions
Creates a model response for the given chat conversation.
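For orientation before the field-by-field reference below, here is a minimal sketch of calling this endpoint from Python with the `requests` library. The Bearer-token Authorization header and the DEEPSEEK_API_KEY environment variable are assumptions for illustration, not part of this reference.

```python
import os
import requests

API_URL = "https://api.deepseek.com/chat/completions"  # POST endpoint described above

response = requests.post(
    API_URL,
    headers={
        # Bearer-token auth is an assumption; see the platform's authentication docs.
        "Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek-chat",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello!"},
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```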
Request
Body (application/json, required)
messages (object[], required)
Possible values: >= 1
A list of messages comprising the conversation so far. Each element is one of the following message types:
System message
- content: The contents of the system message.
- role: The role of the message's author, in this case system. Possible values: [system].
- name: An optional name for the participant. Provides the model information to differentiate between participants of the same role.
User message
- content: The contents of the user message.
- role: The role of the message's author, in this case user. Possible values: [user].
- name: An optional name for the participant. Provides the model information to differentiate between participants of the same role.
Assistant message
- content: The contents of the assistant message.
- role: The role of the message's author, in this case assistant. Possible values: [assistant].
- name: An optional name for the participant. Provides the model information to differentiate between participants of the same role.
- prefix: (Beta) Set this to true to force the model to start its answer with the content supplied in this assistant message. You must set base_url="https://api.deepseek.com/beta" to use this feature.
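A hedged sketch of the beta prefix feature described above: the final assistant message carries the text the answer must start with. The prefix field name is inferred from the description and should be treated as an assumption.

```python
# Assumes the request helper from the first sketch; note the /beta base URL.
payload = {
    "model": "deepseek-chat",
    "messages": [
        {"role": "user", "content": "Write a haiku about the sea."},
        # (Beta) The model is forced to start its answer with this content.
        # The `prefix` field name is an assumption based on the description above.
        {"role": "assistant", "content": "Salt wind", "prefix": True},
    ],
}
# POST payload to https://api.deepseek.com/beta/chat/completions
```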
model (string, required)
Possible values: [deepseek-chat]
ID of the model to use. You can use deepseek-chat.
frequency_penalty (number, nullable)
Possible values: >= -2 and <= 2. Default: 0
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
max_tokens (integer, nullable)
Possible values: > 1
The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
presence_penalty (number, nullable)
Possible values: >= -2 and <= 2. Default: 0
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
response_format (object, nullable)
An object specifying the format that the model must output. Setting this to { "type": "json_object" } enables JSON Output, which guarantees that the message the model generates is valid JSON.
Important: when using JSON Output, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
- type: Must be one of text or json_object. Possible values: [text, json_object]. Default: text.
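A sketch of JSON Output under the rules above: response_format is set to json_object, the system message explicitly asks for JSON, and finish_reason is checked before parsing. The requested shape {"answer": "..."} is just an illustration.

```python
import json
import os
import requests

resp = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"},
    json={
        "model": "deepseek-chat",
        "response_format": {"type": "json_object"},
        "messages": [
            # Per the note above, the prompt itself must ask for JSON.
            {"role": "system",
             "content": 'Reply only with a JSON object like {"answer": "..."}.'},
            {"role": "user", "content": "What is the capital of France?"},
        ],
    },
    timeout=60,
)
choice = resp.json()["choices"][0]
if choice["finish_reason"] != "length":  # content may be cut off on "length"
    result = json.loads(choice["message"]["content"])
```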
stop (string or string[], nullable)
Up to 16 sequences where the API will stop generating further tokens.
stream (boolean, nullable)
If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events (SSE) as they become available, with the stream terminated by a data: [DONE] message.
stream_options (object, nullable)
Options for the streaming response. Only set this when you set stream: true.
- include_usage: If set, an additional chunk will be streamed before the data: [DONE] message. The usage field on this chunk shows the token usage statistics for the entire request, and the choices field will always be an empty array. All other chunks will also include a usage field, but with a null value.
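A sketch of the request body for a streamed completion with the usage chunk enabled. The include_usage field name inside stream_options is an assumption inferred from the description above; a sketch of consuming the resulting event stream appears after the streaming example at the end of this page.

```python
payload = {
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": True,  # deltas arrive as server-sent events
    # `include_usage` is an assumed field name for the option described above:
    # one extra chunk with usage stats is sent before data: [DONE].
    "stream_options": {"include_usage": True},
}
```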
temperature (number, nullable)
Possible values: <= 2. Default: 1
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p, but not both.
top_p (number, nullable)
Possible values: <= 1. Default: 1
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature, but not both.
tools (object[], nullable)
A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported.
- type: The type of the tool. Currently, only function is supported. Possible values: [function].
- function (object, required):
  - description: A description of what the function does, used by the model to choose when and how to call the function.
  - name: The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
  - parameters (object): The parameters the function accepts, described as a JSON Schema object. See the Function Calling Guide for examples, and the JSON Schema reference for documentation about the format. Omitting parameters defines a function with an empty parameter list.
tool_choice (string or object, nullable)
Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
none is the default when no tools are present; auto is the default if tools are present.
One of:
- A string. Possible values: [none, auto, required].
- A named tool choice (object):
  - type: The type of the tool. Currently, only function is supported. Possible values: [function].
  - function (object, required):
    - name: The name of the function to call.
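A sketch of declaring a single function tool and forcing the model to call it via the named tool_choice form above. The get_weather function and its schema are hypothetical, for illustration only.

```python
payload = {
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                # Hypothetical function, for illustration only.
                "name": "get_weather",
                "description": "Get the current weather for a city.",
                "parameters": {  # a JSON Schema object, per the field above
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # Force a call to get_weather; use "auto" to let the model decide.
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}
```

The resulting call arrives in choices[0].message.tool_calls; as the response schema below notes, validate the JSON arguments before invoking your function.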
logprobs (boolean, nullable)
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.
top_logprobs (integer, nullable)
Possible values: <= 20
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
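A sketch of requesting token log probabilities. Per the constraint above, logprobs must be true whenever top_logprobs is set.

```python
payload = {
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Hello!"}],
    "logprobs": True,   # required for top_logprobs to be honored
    "top_logprobs": 5,  # between 0 and 20
}
# After sending the request as in the first sketch:
# for item in response.json()["choices"][0]["logprobs"]["content"]:
#     print(item["token"], item["logprob"])
```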
Responses
200 (No streaming)
OK, returns a chat completion object (application/json).
Schema
id (string): A unique identifier for the chat completion.
choices (object[], required): A list of chat completion choices.
- finish_reason: The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or insufficient_system_resource if the request was interrupted due to insufficient resources in the inference system. Possible values: [stop, length, content_filter, tool_calls, insufficient_system_resource].
- index: The index of the choice in the list of choices.
- message (object, required): A chat completion message generated by the model.
  - content: The contents of the message.
  - tool_calls (object[]): The tool calls generated by the model, such as function calls.
    - id: The ID of the tool call.
    - type: The type of the tool. Currently, only function is supported. Possible values: [function].
    - function (object, required): The function that the model called.
      - name: The name of the function to call.
      - arguments: The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
  - role: The role of the author of this message. Possible values: [assistant].
- logprobs (object, nullable, required): Log probability information for the choice.
  - content (object[], nullable, required): A list of message content tokens with log probability information.
    - token: The token.
    - logprob: The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
    - bytes: A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.
    - top_logprobs (object[], required): A list of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned. Each entry has the same token, logprob, and bytes fields as described above.
created (integer): The Unix timestamp (in seconds) of when the chat completion was created.
model (string): The model used for the chat completion.
system_fingerprint (string): This fingerprint represents the backend configuration that the model runs with.
object: The object type, which is always chat.completion. Possible values: [chat.completion].
usage (object): Usage statistics for the completion request.
- completion_tokens: Number of tokens in the generated completion.
- prompt_tokens: Number of tokens in the prompt. It equals prompt_cache_hit_tokens + prompt_cache_miss_tokens.
- prompt_cache_hit_tokens: Number of tokens in the prompt that hit the context cache.
- prompt_cache_miss_tokens: Number of tokens in the prompt that miss the context cache.
- total_tokens: Total number of tokens used in the request (prompt + completion).
Example (from schema):
{
"id": "string",
"choices": [
{
"finish_reason": "stop",
"index": 0,
"message": {
"content": "string",
"tool_calls": [
{
"id": "string",
"type": "function",
"function": {
"name": "string",
"arguments": "string"
}
}
],
"role": "assistant"
},
"logprobs": {
"content": [
{
"token": "string",
"logprob": 0,
"bytes": [
0
],
"top_logprobs": [
{
"token": "string",
"logprob": 0,
"bytes": [
0
]
}
]
}
]
}
}
],
"created": 0,
"model": "string",
"system_fingerprint": "string",
"object": "chat.completion",
"usage": {
"completion_tokens": 0,
"prompt_tokens": 0,
"prompt_cache_hit_tokens": 0,
"prompt_cache_miss_tokens": 0,
"total_tokens": 0
}
}
Example:
{
"id": "930c60df-bf64-41c9-a88e-3ec75f81e00e",
"choices": [
{
"finish_reason": "stop",
"index": 0,
"message": {
"content": "Hello! How can I help you today?",
"role": "assistant"
}
}
],
"created": 1705651092,
"model": "deepseek-chat",
"object": "chat.completion",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 16,
"total_tokens": 26
}
}
200 (Streaming)
OK, returns a streamed sequence of chat completion chunk objects (text/event-stream).
Schema
id (string): A unique identifier for the chat completion. Each chunk has the same ID.
choices (object[], required): A list of chat completion choices.
- delta (object, required): A chat completion delta generated by streamed model responses.
  - content: The contents of the chunk message.
  - role: The role of the author of this message. Possible values: [assistant].
- finish_reason: The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or insufficient_system_resource if the request was interrupted due to insufficient resources in the inference system. Possible values: [stop, length, content_filter, tool_calls, insufficient_system_resource].
- index: The index of the choice in the list of choices.
created (integer): The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.
model (string): The model used to generate the completion.
system_fingerprint (string): This fingerprint represents the backend configuration that the model runs with.
object: The object type, which is always chat.completion.chunk. Possible values: [chat.completion.chunk].
Example:
data: {"id": "1f633d8bfc032625086f14113c411638", "choices": [{"index": 0, "delta": {"content": "", "role": "assistant"}, "finish_reason": null, "logprobs": null}], "created": 1718345013, "model": "deepseek-chat", "system_fingerprint": "fp_a49d71b8a1", "object": "chat.completion.chunk", "usage": null}
data: {"choices": [{"delta": {"content": "Hello", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": "!", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": " How", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": " can", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": " I", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": " assist", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": " you", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": " today", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": "?", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": "", "role": null}, "finish_reason": "stop", "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1", "usage": {"completion_tokens": 9, "prompt_tokens": 17, "total_tokens": 26}}
data: [DONE]
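A minimal sketch of consuming the event stream above with Python's `requests` library, under the same authentication assumptions as the first sketch. Each event is a `data: {...}` line, and `data: [DONE]` terminates the stream.

```python
import json
import os
import requests

resp = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"},
    json={
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": True,
    },
    stream=True,
    timeout=60,
)
for raw in resp.iter_lines():
    if not raw:
        continue  # skip blank keep-alive lines between events
    line = raw.decode("utf-8")
    if not line.startswith("data: "):
        continue
    data = line[len("data: "):]
    if data == "[DONE]":
        break  # stream terminator, as shown above
    chunk = json.loads(data)
    if chunk["choices"]:  # the optional usage chunk has an empty choices array
        content = chunk["choices"][0]["delta"].get("content")
        if content:
            print(content, end="", flush=True)
print()
```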