Chat
Create Chat Completion
POST /chat/completions
Generate a chat message completion from the selected LLM.
An ID you can pass to refer to one or more requests later on. If not provided, Portkey generates a trace ID automatically for each request.
An ID you can pass to refer to a span under a trace.
Link a child span to a parent span
Name for the Span ID
Pass any arbitrary metadata along with your request
Partition your Portkey cache store based on custom strings, ignoring metadata and other headers
Forces a cache refresh for your request by making a new API call and storing the updated value
model: ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API. Example: gpt-4-turbo

frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
See more information about frequency and presence penalties.
Default: 0

logprobs: Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. Default: false

top_logprobs: An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
max_tokens: The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
n: How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs. Default: 1

presence_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
See more information about frequency and presence penalties.
Default: 0

response_format: An object specifying the format that the model must output.
Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs, which ensures the model will match your supplied JSON schema. This works across all the providers that support this functionality: OpenAI & Azure OpenAI, Gemini & Vertex AI.
Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.
seed: This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.
stop: Up to 4 sequences where the API will stop generating further tokens. Default: null

stream: If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
Default: false

temperature: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.
Default: 1

top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
Default: 1

tool_choice: Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or more tools.
required means the model must call one or more tools.
Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
none is the default when no tools are present. auto is the default if tools are present.
parallel_tool_calls: Whether to enable parallel function calling during tool use. Default: true

user: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
Example: user-1234

function_call: Deprecated in favor of tool_choice. Controls which (if any) function is called by the model.
none means the model will not call a function and instead generates a message.
auto means the model can pick between generating a message or calling a function.
Specifying a particular function via {"name": "my_function"} forces the model to call that function.
none is the default when no functions are present. auto is the default if functions are present.
The request body is similar to OpenAI's Chat Completions request, and the response will be a Chat Completions object. When stream: true is set, the response will be a stream of Chat Completion Chunk objects.
Portkey automatically transforms the parameters for LLMs other than OpenAI. If some parameters do not exist for the other LLMs, they will be dropped.
SDK Usage
The chat.completions.create method in the Portkey SDK enables you to generate chat completions using various large language models (LLMs). This method is designed to be similar to the OpenAI Chat Completions API, providing a familiar interface for users accustomed to OpenAI's services.
Method Signature
For REST API examples, scroll down to the sections below.
Parameters
requestParams (Object): Parameters for the chat completion request, detailing the chat interaction. These are similar to the OpenAI request signature. Portkey automatically transforms the parameters for LLMs other than OpenAI. If some parameters do not exist for the other LLMs, they will be dropped. Portkey is multimodal by default, so parameters relevant to vision models, such as image_url and base64 data, are also supported.

configParams (Object): Additional configuration options for the request. This is an optional parameter that can include custom config options for this specific request. These will override the configs set in the Portkey client. The full list of config parameters can be found here.
Example Usage
1. Default
The Chat Completions endpoint accepts an array of message objects and returns the completion in a chat message format.
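As a minimal sketch, the request body for a default completion might look like the following. The SDK call in the comment assumes the portkey-ai Python package, and the key values are placeholders:

```python
# Minimal sketch of a default chat completion request body, mirroring the
# OpenAI-style signature described above.
payload = {
    "model": "gpt-4-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

# With the SDK (not executed here), the same request would be roughly:
#   from portkey_ai import Portkey
#   portkey = Portkey(api_key="YOUR_PORTKEY_API_KEY", virtual_key="YOUR_VIRTUAL_KEY")
#   completion = portkey.chat.completions.create(**payload)
#   print(completion.choices[0].message.content)

print(sorted(payload.keys()))
```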
In REST calls, x-portkey-api-key is a required header, and it can be paired with the following options for sending provider details:
- x-portkey-provider and Authorization (or a similar authentication header)
- x-portkey-virtual-key
- x-portkey-config
Example request using provider + auth:
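A sketch of this request in Python, using only the standard library. The endpoint URL and placeholder keys are assumptions; the header names follow the list above:

```python
# Sketch: a REST request authenticated with provider + Authorization headers.
import json
import urllib.request

url = "https://api.portkey.ai/v1/chat/completions"  # assumed Portkey endpoint
headers = {
    "Content-Type": "application/json",
    "x-portkey-api-key": "YOUR_PORTKEY_API_KEY",   # placeholder
    "x-portkey-provider": "openai",
    "Authorization": "Bearer YOUR_OPENAI_API_KEY",  # placeholder
}
body = {
    "model": "gpt-4-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(url, data=json.dumps(body).encode(), headers=headers)
# urllib.request.urlopen(req) would send the request (not executed here).
print(req.get_full_url())
```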
Example request using a virtual key:
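The same request can instead authenticate with a virtual key. A sketch with placeholder key values:

```python
# Sketch: authenticating with x-portkey-virtual-key instead of
# provider + Authorization headers. Key values are placeholders.
headers = {
    "Content-Type": "application/json",
    "x-portkey-api-key": "YOUR_PORTKEY_API_KEY",
    "x-portkey-virtual-key": "YOUR_VIRTUAL_KEY",
}
body = {
    "model": "gpt-4-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
}
print(sorted(headers))
```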
Example request using a config:
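Routing can also be driven by a saved config, sent via the x-portkey-config header. A sketch; "pc-xxxx" is a hypothetical config ID:

```python
# Sketch: sending a saved Portkey config ID via the x-portkey-config header.
# "pc-xxxx" is an illustrative placeholder, not a real config ID.
headers = {
    "Content-Type": "application/json",
    "x-portkey-api-key": "YOUR_PORTKEY_API_KEY",
    "x-portkey-config": "pc-xxxx",
}
body = {
    "model": "gpt-4-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
}
print(sorted(headers))
```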
You can send 3 other headers in your Portkey requests:
- x-portkey-trace-id: Send a trace ID
- x-portkey-metadata: Send custom metadata
- x-portkey-cache-force-refresh: Force a cache refresh for this request
Example request using these 3 headers:
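A sketch combining the three optional headers above. The trace ID and metadata values are illustrative placeholders:

```python
# Sketch: attaching trace ID, custom metadata, and cache-force-refresh headers.
# Values are illustrative; metadata is sent as a JSON string.
import json

headers = {
    "Content-Type": "application/json",
    "x-portkey-api-key": "YOUR_PORTKEY_API_KEY",
    "x-portkey-virtual-key": "YOUR_VIRTUAL_KEY",
    "x-portkey-trace-id": "my-trace-id",
    "x-portkey-metadata": json.dumps({"_user": "user-1234"}),
    "x-portkey-cache-force-refresh": "true",
}
print(sorted(headers))
```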
To send chat requests to a locally or privately hosted model, check out the guide on Ollama.
2. Image Input (Vision Models)
The Chat Completions API also supports adding images to requests for vision models (GPT-4V, Gemini, etc.).
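A sketch of a vision-style message: the content field carries a list of parts mixing text and an image_url, per the OpenAI multimodal message format. The image URL is a placeholder:

```python
# Sketch: a multimodal message combining text and an image_url content part.
# The image URL is a placeholder.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    }
]
payload = {"model": "gpt-4-turbo", "messages": messages}
print(len(payload["messages"][0]["content"]))
```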
3. Streaming Chat Completions
Set the stream parameter to true in the request to enable streaming responses from the Chat Completions API.
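A sketch of the same request body with streaming enabled. The SDK iteration in the comment assumes the portkey-ai package:

```python
# Sketch: enabling streaming by setting stream to True in the request body.
payload = {
    "model": "gpt-4-turbo",
    "messages": [{"role": "user", "content": "Tell me a story."}],
    "stream": True,
}

# With the SDK (not executed here), the response is iterable chunk objects:
#   stream = portkey.chat.completions.create(**payload)
#   for chunk in stream:
#       print(chunk.choices[0].delta.content or "", end="")

print(payload["stream"])
```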
4. Functions
The tools parameter accepts functions that can be sent specifically to models that support function calling.
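A sketch of the tools parameter, using the OpenAI function-tool format described in the parameters section. The function name and schema are illustrative:

```python
# Sketch: a single function schema in the OpenAI tool format.
# get_weather and its schema are illustrative, not a real API.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]
payload = {
    "model": "gpt-4-turbo",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",
}
print(payload["tools"][0]["function"]["name"])
```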
5. Custom Configuration Per Request
There may be a need to override config values per request, or to send trace ID and metadata options as part of the request. This can be achieved by attaching these parameters to the request.
Example request using a config:
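A sketch of a per-request override, mirroring the configParams object described in the Parameters section. The with_options-style SDK call in the comment is an assumption, and all values are placeholders:

```python
# Sketch: per-request override values, mirroring the configParams object.
# The config ID, trace ID, and metadata are illustrative placeholders.
request_params = {
    "model": "gpt-4-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
}
config_params = {
    "config": "pc-xxxx",          # placeholder config ID
    "trace_id": "my-trace-id",
    "metadata": {"_user": "user-1234"},
}

# With the SDK (not executed here; with_options is an assumed helper), roughly:
#   completion = portkey.with_options(**config_params).chat.completions.create(**request_params)

print(sorted(config_params))
```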
Response Format
The response will follow the Chat Completions Object schema of the Portkey API, typically including the generated message and associated metadata.