Library: turbot/openai
Pipeline: Create Chat Completion

Creates a model response for the given chat conversation.

Run the pipeline

To run this pipeline from your terminal:

```shell
flowpipe pipeline run openai.pipeline.create_chat_completion \
  --arg 'model=<string>' \
  --arg 'system_content=<string>' \
  --arg 'user_content=<string>' \
  --arg 'max_tokens=<number>' \
  --arg 'temperature=<number>'
```
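For example, a run with concrete argument values might look like this (the model ID, prompt text, and numeric values below are illustrative, not defaults):

```shell
flowpipe pipeline run openai.pipeline.create_chat_completion \
  --arg 'model=gpt-4o' \
  --arg 'system_content=You are a helpful assistant.' \
  --arg 'user_content=Summarize what Flowpipe does in one sentence.' \
  --arg 'max_tokens=256' \
  --arg 'temperature=0.7'
```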

Use this pipeline

To call this pipeline from another pipeline, use it in a pipeline step:

```hcl
step "pipeline" "step_name" {
  pipeline = openai.pipeline.create_chat_completion
  args = {
    model          = <string>
    system_content = <string>
    user_content   = <string>
    max_tokens     = <number>
    temperature    = <number>
  }
}
```
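As a sketch, a calling pipeline might fill in the step like this (the pipeline name, step name, model ID, and prompt text are illustrative assumptions):

```hcl
pipeline "summarize" {
  step "pipeline" "chat" {
    pipeline = openai.pipeline.create_chat_completion
    args = {
      model          = "gpt-4o" # illustrative model ID
      system_content = "You are a helpful assistant."
      user_content   = "Summarize what Flowpipe does in one sentence."
      max_tokens     = 256
      temperature    = 0.7
    }
  }
}
```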

Params

| Name | Type | Required | Description | Default |
| --- | --- | --- | --- | --- |
| conn | connection.openai | Yes | Name of the OpenAI connection to use. If not provided, the default OpenAI connection will be used. | connection.openai.default |
| model | string | Yes | ID of the model to use. See the [model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API. | - |
| system_content | string | Yes | The contents of the system message. | - |
| user_content | string | Yes | The contents of the user message. | - |
| max_tokens | number | Yes | The maximum number of tokens to generate in the chat completion. | - |
| temperature | number | Yes | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. | - |

Outputs

| Name | Description |
| --- | --- |
| choices | The list of chat completion choices returned by the model. |
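A downstream step or output can reference this value through the step's output attribute. A minimal sketch (the step name `chat` is an assumption, and the indexing assumes `choices` follows the OpenAI chat completion response shape):

```hcl
output "first_reply" {
  # The assistant's text sits at choices[0].message.content
  # in the OpenAI chat completion response format.
  value = step.pipeline.chat.output.choices[0].message.content
}
```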

Tags

recommended = true