turbot/openai
Pipeline: Create Chat Completion
Creates a model response for the given chat conversation.
Run the pipeline
To run this pipeline from your terminal:
```shell
flowpipe pipeline run openai.pipeline.create_chat_completion \
  --arg 'model=<string>' \
  --arg 'system_content=<string>' \
  --arg 'user_content=<string>' \
  --arg 'max_tokens=<number>' \
  --arg 'temperature=<number>'
```
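For example, a concrete invocation might look like the following (the model name and prompt values are illustrative, and a configured OpenAI connection is assumed):

```shell
# Run the pipeline with sample arguments; requires a configured OpenAI connection.
flowpipe pipeline run openai.pipeline.create_chat_completion \
  --arg 'model=gpt-4o-mini' \
  --arg 'system_content=You are a helpful assistant.' \
  --arg 'user_content=Summarize the benefits of infrastructure as code.' \
  --arg 'max_tokens=256' \
  --arg 'temperature=0.2'
```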
Use this pipeline
To call this pipeline from within another pipeline, use a pipeline step:
```hcl
step "pipeline" "step_name" {
  pipeline = openai.pipeline.create_chat_completion
  args = {
    model          = <string>
    system_content = <string>
    user_content   = <string>
    max_tokens     = <number>
    temperature    = <number>
  }
}
```
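As a sketch, a calling pipeline might fill in the arguments like this (the pipeline name, step name, model, and prompt values are all illustrative):

```hcl
pipeline "summarize" {

  # Call the chat completion pipeline with sample arguments.
  step "pipeline" "ask_openai" {
    pipeline = openai.pipeline.create_chat_completion
    args = {
      model          = "gpt-4o-mini"
      system_content = "You are a helpful assistant."
      user_content   = "Summarize the benefits of infrastructure as code."
      max_tokens     = 256
      temperature    = 0.2
    }
  }

  # Surface the model's choices as a pipeline output.
  output "answer" {
    value = step.pipeline.ask_openai.output.choices
  }
}
```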
Params
Name | Type | Required | Description | Default |
---|---|---|---|---|
conn | connection.openai | No | Name of the OpenAI connection to use. If not provided, the default OpenAI connection will be used. | connection.openai.default |
model | string | Yes | ID of the model to use. See the [model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API. | - |
system_content | string | Yes | The content of the system message, which sets the assistant's behavior. | - |
user_content | string | Yes | The content of the user message, i.e. the prompt for the model to respond to. | - |
max_tokens | number | Yes | The maximum number of tokens to generate in the chat completion. | - |
temperature | number | Yes | Sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. | - |
Outputs
Name | Description |
---|---|
choices | A list of chat completion choices returned by the model. |
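Downstream steps in a calling pipeline can reference this output through the step reference, following standard Flowpipe output syntax (the step name below is illustrative):

```hcl
# Reference the choices output of a step named "ask_openai".
output "first_choice" {
  value = step.pipeline.ask_openai.output.choices
}
```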
Tags
recommended = true