turbot/openai
Create Chat Completion
Creates a model response for the given chat conversation.
Run the pipeline
To run this pipeline from your terminal:
```shell
flowpipe pipeline run openai.pipeline.create_chat_completion \
  --arg 'system_content=<string>' \
  --arg 'user_content=<string>'
```
Use this pipeline
To call this pipeline from another pipeline, use a pipeline step:
```hcl
step "pipeline" "step_name" {
  pipeline = openai.pipeline.create_chat_completion
  args = {
    system_content = <string>
    user_content   = <string>
  }
}
```
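For context, a complete calling pipeline might look like the following sketch. The pipeline name, step name, and output are illustrative, not part of this library:

```hcl
# Hypothetical wrapper pipeline; only openai.pipeline.create_chat_completion
# and its args come from this library.
pipeline "summarize_text" {
  param "text" {
    type = string
  }

  step "pipeline" "ask_openai" {
    pipeline = openai.pipeline.create_chat_completion
    args = {
      system_content = "You are a concise summarizer."
      user_content   = param.text
    }
  }

  output "choices" {
    value = step.pipeline.ask_openai.output.choices
  }
}
```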
Params
Name | Type | Required | Description | Default |
---|---|---|---|---|
cred | string | Yes | Name of the credentials to use. If not provided, the default credentials will be used. | default |
model | string | Yes | ID of the model to use. See the [model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API. | gpt-3.5-turbo |
system_content | string | Yes | The content of the system message. | - |
user_content | string | Yes | The content of the user message. | - |
max_tokens | number | Yes | The maximum number of tokens to generate in the chat completion. | 50 |
temperature | number | Yes | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. | 1 |
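Under the hood, these parameters map onto the request body of OpenAI's Chat Completions API. A minimal sketch of that mapping (the helper function is illustrative; the pipeline handles the actual HTTP call and credentials):

```python
# Illustrative sketch: build the JSON body that the pipeline's parameters
# correspond to, per the OpenAI Chat Completions API. Defaults mirror
# the parameter table above.
def build_chat_request(system_content, user_content,
                       model="gpt-3.5-turbo", max_tokens=50, temperature=1):
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_content},
            {"role": "user", "content": user_content},
        ],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

body = build_chat_request("You are a helpful assistant.", "Say hello.")
```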
Outputs
Name | Description |
---|---|
choices | A list of chat completion choices returned by the model. |
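To consume the output in a calling pipeline, reference it from the step. Assuming `choices` follows the OpenAI response shape (a list of choices, each with a `message`), a first reply could be extracted like this (step and output names are illustrative):

```hcl
output "first_reply" {
  # Assumes the OpenAI response shape: choices[0].message.content holds
  # the assistant's reply.
  value = step.pipeline.step_name.output.choices[0].message.content
}
```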
Tags
type = featured