Create Chat Completion
POST /chat/completions
Creates a model response for the given chat conversation.
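For example, a minimal call in Python might look like the sketch below. It assumes the requests library, a base URL of https://api.deepseek.com, and Bearer authentication with a key in the DEEPSEEK_API_KEY environment variable; none of those details are specified on this page, so confirm them against the official docs.

```python
import os
import requests

# Assumed base URL and auth scheme (not specified on this page).
API_URL = "https://api.deepseek.com/chat/completions"
headers = {"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"}

payload = {
    "model": "deepseek-chat",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hi!"},
    ],
}

resp = requests.post(API_URL, headers=headers, json=payload)
resp.raise_for_status()
# The reply text lives at choices[0].message.content (see the response schema below).
print(resp.json()["choices"][0]["message"]["content"])
```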
Request

Body (application/json, required)
messages (object[], required)
Possible values: >= 1
A list of messages comprising the conversation so far.
oneOf: System message | User message | Assistant message

System message
  content (string)
  The contents of the system message.

  role (string)
  Possible values: [system]
  The role of the message's author, in this case system.

  name (string)
  An optional name for the participant. Provides the model information to differentiate between participants of the same role.

User message
  content (string)
  The contents of the user message.

  role (string)
  Possible values: [user]
  The role of the message's author, in this case user.

  name (string)
  An optional name for the participant. Provides the model information to differentiate between participants of the same role.

Assistant message
  content (string)
  The contents of the assistant message.

  role (string)
  Possible values: [assistant]
  The role of the message's author, in this case assistant.

  name (string)
  An optional name for the participant. Provides the model information to differentiate between participants of the same role.
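As a sketch, a conversation mixing all three message shapes; the alice and bob values are purely illustrative uses of the optional name field:

```python
messages = [
    {"role": "system", "content": "You are a terse assistant."},
    # name is optional: it helps the model tell same-role participants apart.
    {"role": "user", "name": "alice", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "4."},
    {"role": "user", "name": "bob", "content": "And doubled?"},
]
```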
model (string)
Possible values: [deepseek-chat, deepseek-coder]
ID of the model to use. You can use either deepseek-chat or deepseek-coder.
frequency_penalty (number)
Possible values: >= -2 and <= 2
Default value: 0
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
max_tokens (integer)
Possible values: > 1
The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
presence_penalty (number)
Possible values: >= -2 and <= 2
Default value: 0
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
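As an illustrative sketch, a request that damps repetition while nudging the model toward new topics might set both penalties (the values here are arbitrary, not recommendations from this page):

```python
payload = {
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Brainstorm ten blog topics."}],
    "frequency_penalty": 0.5,  # penalize frequent tokens: less verbatim repetition
    "presence_penalty": 0.6,   # penalize already-seen tokens: more new topics
}
```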
stop (string | string[])
Up to 8 sequences where the API will stop generating further tokens. Accepts either a single string or a list of strings (oneOf: string, string[]).
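Both accepted shapes, sketched with illustrative values on the payload from the earlier examples:

```python
payload["stop"] = "\n\n"           # a single stop sequence
payload["stop"] = ["END", "\n\n"]  # or a list of up to 8 sequences
```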
stream (boolean)
If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events (SSE) as they become available, with the stream terminated by a data: [DONE] message.
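A sketch of consuming the stream with requests, reusing API_URL and headers from the first example; the data: framing and the [DONE] sentinel are as described above:

```python
import json

payload["stream"] = True
with requests.post(API_URL, headers=headers, json=payload, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue  # SSE events are separated by blank lines
        data = line.decode("utf-8").removeprefix("data: ")
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        print(delta.get("content") or "", end="", flush=True)
```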
temperature (number)
Possible values: <= 2
Default value: 1
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
top_p (number)
Possible values: <= 1
Default value: 1
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
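A sketch of following that recommendation, adjusting one sampling knob and leaving the other at its default:

```python
payload["temperature"] = 0.2  # more focused and deterministic
# payload["top_p"] = 0.1     # the alternative knob: set one or the other, not both
```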
logprobs (boolean)
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.
top_logprobs (integer)
Possible values: <= 20
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
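A sketch of requesting log probabilities and reading them back out; the field names (logprobs.content[].token, .logprob, .top_logprobs) match the response schema below:

```python
payload["logprobs"] = True
payload["top_logprobs"] = 5  # requires logprobs to be true

resp = requests.post(API_URL, headers=headers, json=payload)
resp.raise_for_status()
for item in resp.json()["choices"][0]["logprobs"]["content"]:
    # Each entry carries the chosen token, its logprob, and the top alternatives.
    alts = {alt["token"]: alt["logprob"] for alt in item["top_logprobs"]}
    print(item["token"], item["logprob"], alts)
```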
Responses

200 (No streaming)

OK, returns a chat completion object

application/json

Schema
id (string)
A unique identifier for the chat completion.
choices (object[], required)
A list of chat completion choices.
  finish_reason (string)
  Possible values: [stop, length, content_filter]
  The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, or content_filter if content was omitted due to a flag from our content filters.

  index (integer)
  The index of the choice in the list of choices.
  message (object, required)
  A chat completion message generated by the model.

    content (string)
    The contents of the message.

    role (string)
    Possible values: [assistant]
    The role of the author of this message.
  logprobs (object, nullable, required)
  Log probability information for the choice.

    content (object[], nullable, required)
    A list of message content tokens with log probability information.

      token (string)
      The token.

      logprob (number)
      The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.

      bytes (integer[], nullable)
      A list of integers representing the UTF-8 byte representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.

      top_logprobs (object[], required)
      List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned.

        token (string)
        The token.

        logprob (number)
        The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.

        bytes (integer[], nullable)
        A list of integers representing the UTF-8 byte representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.
created (integer)
The Unix timestamp (in seconds) of when the chat completion was created.

model (string)
The model used for the chat completion.

system_fingerprint (string)
This fingerprint represents the backend configuration that the model runs with.

object (string)
Possible values: [chat.completion]
The object type, which is always chat.completion.
usage (object)
Usage statistics for the completion request.

  completion_tokens (integer)
  Number of tokens in the generated completion.

  prompt_tokens (integer)
  Number of tokens in the prompt.

  total_tokens (integer)
  Total number of tokens used in the request (prompt + completion).
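For instance, the accounting always satisfies prompt + completion = total, which a client can sanity-check (a sketch against the example response below):

```python
usage = resp.json()["usage"]
# total_tokens is prompt + completion, per the field descriptions above.
assert usage["prompt_tokens"] + usage["completion_tokens"] == usage["total_tokens"]
print(f"billed tokens: {usage['total_tokens']}")
```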
Example (from schema):

{
  "id": "string",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "string",
        "role": "assistant"
      },
      "logprobs": {
        "content": [
          {
            "token": "string",
            "logprob": 0,
            "bytes": [0],
            "top_logprobs": [
              {
                "token": "string",
                "logprob": 0,
                "bytes": [0]
              }
            ]
          }
        ]
      }
    }
  ],
  "created": 0,
  "model": "string",
  "system_fingerprint": "string",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 0,
    "prompt_tokens": 0,
    "total_tokens": 0
  }
}
Example:

{
  "id": "930c60df-bf64-41c9-a88e-3ec75f81e00e",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Hello! How can I help you today?",
        "role": "assistant"
      }
    }
  ],
  "created": 1705651092,
  "model": "deepseek-chat",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 10,
    "prompt_tokens": 16,
    "total_tokens": 26
  }
}
200 (Streaming)

OK, returns a streamed sequence of chat completion chunk objects

text/event-stream

Schema
id (string)
A unique identifier for the chat completion. Each chunk has the same ID.

choices (object[], required)
A list of chat completion choices.

  delta (object, required)
  A chat completion delta generated by streamed model responses.

    content (string)
    The contents of the chunk message.

    role (string)
    Possible values: [assistant]
    The role of the author of this message.

  finish_reason (string)
  Possible values: [stop, length, content_filter]
  The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, or content_filter if content was omitted due to a flag from our content filters.

  index (integer)
  The index of the choice in the list of choices.

created (integer)
The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.

model (string)
The model used to generate the completion.

system_fingerprint (string)
This fingerprint represents the backend configuration that the model runs with.

object (string)
Possible values: [chat.completion.chunk]
The object type, which is always chat.completion.chunk.

Example:
data: {"id": "1f633d8bfc032625086f14113c411638", "choices": [{"index": 0, "delta": {"content": "", "role": "assistant"}, "finish_reason": null, "logprobs": null}], "created": 1718345013, "model": "deepseek-chat", "system_fingerprint": "fp_a49d71b8a1", "object": "chat.completion.chunk", "usage": null}
data: {"choices": [{"delta": {"content": "Hello", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": "!", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": " How", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": " can", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": " I", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": " assist", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": " you", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": " today", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": "?", "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1"}
data: {"choices": [{"delta": {"content": "", "role": null}, "finish_reason": "stop", "index": 0, "logprobs": null}], "created": 1718345013, "id": "1f633d8bfc032625086f14113c411638", "model": "deepseek-chat", "object": "chat.completion.chunk", "system_fingerprint": "fp_a49d71b8a1", "usage": {"completion_tokens": 9, "prompt_tokens": 17, "total_tokens": 26}}
data: [DONE]