turbot/openai
Create Chat Completion

Creates a model response for the given chat conversation.

Run the pipeline

To run this pipeline from your terminal:

```bash
flowpipe pipeline run openai.pipeline.create_chat_completion \
  --arg 'system_content=<string>' \
  --arg 'user_content=<string>'
```
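The optional parameters listed under Params below can be overridden the same way. A sketch, assuming the defaults shown in the Params table (the argument values here are illustrative):

```bash
flowpipe pipeline run openai.pipeline.create_chat_completion \
  --arg 'system_content=You are a helpful assistant.' \
  --arg 'user_content=Summarize the theory of relativity in one sentence.' \
  --arg 'model=gpt-4' \
  --arg 'max_tokens=100' \
  --arg 'temperature=0.2'
```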

Use this pipeline

To call this pipeline from another pipeline, use a pipeline step:

```hcl
step "pipeline" "step_name" {
  pipeline = openai.pipeline.create_chat_completion
  args = {
    system_content = <string>
    user_content   = <string>
  }
}
```
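The step's result can then be referenced from later steps or pipeline outputs. A minimal sketch, assuming the standard Flowpipe `step.pipeline.<name>.output.<name>` reference syntax (pipeline and step names here are hypothetical):

```hcl
pipeline "ask_openai" {
  step "pipeline" "completion" {
    pipeline = openai.pipeline.create_chat_completion
    args = {
      system_content = "You are a helpful assistant."
      user_content   = "What is the capital of France?"
    }
  }

  output "answer" {
    # Expose the model's choices from the chat completion step
    value = step.pipeline.completion.output.choices
  }
}
```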

Params

| Name | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| cred | string | Yes | default | Name for credentials to use. If not provided, the default credentials will be used. |
| model | string | Yes | gpt-3.5-turbo | ID of the model to use. See the [model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API. |
| system_content | string | Yes | - | The contents of the system message. |
| user_content | string | Yes | - | The contents of the user message. |
| max_tokens | number | Yes | 50 | The maximum number of tokens to generate in the chat completion. |
| temperature | number | Yes | 1 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. |

Outputs

| Name | Description |
| --- | --- |
| choices | A list of chat completion choices returned by the model. |
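The `choices` output follows the OpenAI Chat Completions response shape. A typical single-choice value looks like the following (illustrative, not verbatim pipeline output):

```json
[
  {
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Paris is the capital of France."
    },
    "finish_reason": "stop"
  }
]
```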