turbot/flowpipe-samples/summarize-github-issue-with-openai

Pipeline: Summarize GitHub Issue with OpenAI

Summarize a GitHub issue with OpenAI.

Run the pipeline

To run this pipeline from your terminal:

flowpipe pipeline run summarize_github_issue_with_openai.pipeline.summarize_github_issue_with_openai \
--arg 'github_repository_owner=<string>' \
--arg 'github_repository_name=<string>' \
--arg 'github_issue_number=<number>' \
--arg 'openai_system_content=<string>' \
--arg 'openai_model=<string>' \
--arg 'openai_max_tokens=<number>' \
--arg 'openai_temperature=<number>' \
--arg 'slack_channel=<string>'
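
For example, a run that summarizes a specific issue and posts the result to a Slack channel might look like the following. All values here are illustrative (the repository, issue number, model name, and channel are placeholders, not part of the sample):

```shell
flowpipe pipeline run summarize_github_issue_with_openai.pipeline.summarize_github_issue_with_openai \
--arg 'github_repository_owner=acme' \
--arg 'github_repository_name=widgets' \
--arg 'github_issue_number=123' \
--arg 'openai_system_content=You are a helpful assistant that summarizes GitHub issues concisely.' \
--arg 'openai_model=gpt-3.5-turbo' \
--arg 'openai_max_tokens=300' \
--arg 'openai_temperature=0.2' \
--arg 'slack_channel=#general'
```

A low temperature such as 0.2 keeps the summary focused and deterministic, which is usually what you want for issue summaries.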

Use this pipeline

To call this pipeline from another pipeline, use a pipeline step:

step "pipeline" "step_name" {
  pipeline = summarize_github_issue_with_openai.pipeline.summarize_github_issue_with_openai
  args = {
    github_repository_owner = <string>
    github_repository_name  = <string>
    github_issue_number     = <number>
    openai_system_content   = <string>
    openai_model            = <string>
    openai_max_tokens       = <number>
    openai_temperature      = <number>
    slack_channel           = <string>
  }
}
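
A filled-in sketch of such a step, with illustrative argument values and the step name `summarize_issue` chosen for this example, might look like this. The output reference follows Flowpipe's `step.pipeline.<step_name>.output.<output_name>` convention:

```hcl
step "pipeline" "summarize_issue" {
  pipeline = summarize_github_issue_with_openai.pipeline.summarize_github_issue_with_openai
  args = {
    github_repository_owner = "acme"      # illustrative owner
    github_repository_name  = "widgets"   # illustrative repository
    github_issue_number     = 123
    openai_system_content   = "You are a helpful assistant that summarizes GitHub issues concisely."
    openai_model            = "gpt-3.5-turbo"
    openai_max_tokens       = 300
    openai_temperature      = 0.2
    slack_channel           = "#general"
  }
}

output "issue_summary" {
  value = step.pipeline.summarize_issue.output.openai_response
}
```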

Params

| Name | Type | Required | Description | Default |
|------|------|----------|-------------|---------|
| github_conn | connection.github | Yes | Name of GitHub connection to use. If not provided, the default GitHub connection will be used. | connection.github.default |
| slack_conn | connection.slack | Yes | Name of Slack connection to use. If not provided, the default Slack connection will be used. | connection.slack.default |
| openai_conn | connection.openai | Yes | Name of OpenAI connection to use. If not provided, the default OpenAI connection will be used. | connection.openai.default |
| github_repository_owner | string | Yes | The organization or user name. | - |
| github_repository_name | string | Yes | The name of the repository. | - |
| github_issue_number | number | Yes | The number of the issue. | - |
| openai_system_content | string | Yes | The role of the message's author. System in this case. | - |
| openai_model | string | Yes | ID of the model to use. See the [model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API. | - |
| openai_max_tokens | number | Yes | The maximum number of tokens to generate in the chat completion. | - |
| openai_temperature | number | Yes | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. | - |
| slack_channel | string | Yes | Channel, private group, or IM channel to send the message to. | - |

Outputs

| Name | Description |
|------|-------------|
| openai_response | |