Create Chat Completion

Creates a model response for the given chat conversation.

Run the pipeline

To run this pipeline from your terminal:

```shell
flowpipe pipeline run openai.pipeline.create_chat_completion \
  --arg 'system_content=<string>' \
  --arg 'user_content=<string>'
```
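As a concrete sketch, the same command with illustrative argument values, also overriding two of the optional defaults (`model` and `max_tokens`; the message contents are examples only):

```shell
flowpipe pipeline run openai.pipeline.create_chat_completion \
  --arg 'system_content=You are a helpful assistant.' \
  --arg 'user_content=Summarize what Flowpipe does in one sentence.' \
  --arg 'model=gpt-4' \
  --arg 'max_tokens=100'
```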

Use this pipeline

To call this pipeline from another pipeline, use a pipeline step:

```hcl
step "pipeline" "step_name" {
  pipeline = openai.pipeline.create_chat_completion
  args = {
    system_content = <string>
    user_content   = <string>
  }
}
```
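As a minimal sketch, a parent pipeline that wires this step together and surfaces the model's response (the pipeline name, step name, and message contents are illustrative; the `choices` output is the one listed under Outputs below):

```hcl
pipeline "ask_openai" {
  step "pipeline" "chat" {
    pipeline = openai.pipeline.create_chat_completion
    args = {
      system_content = "You are a helpful assistant."
      user_content   = "Summarize what Flowpipe does in one sentence."
    }
  }

  # Expose the chat completion choices returned by the called pipeline.
  output "choices" {
    value = step.pipeline.chat.output.choices
  }
}
```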

Params

| Name | Type | Required | Description | Default |
| --- | --- | --- | --- | --- |
| cred | string | Yes | Name of the credentials to use. If not provided, the default credentials will be used. | default |
| model | string | Yes | ID of the model to use. See the [model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API. | gpt-3.5-turbo |
| system_content | string | Yes | The content of the system message, used to set the assistant's behavior. | - |
| user_content | string | Yes | The content of the user message. | - |
| max_tokens | number | Yes | The maximum number of tokens to generate in the chat completion. | 50 |
| temperature | number | Yes | Sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. | 1 |

Outputs

| Name | Description |
| --- | --- |
| choices | The chat completion choices returned by the model. |

Tags

type = featured