AWS::Bedrock::Flow PromptModelInferenceConfiguration
Contains inference configurations related to model inference for a prompt. For more information, see Inference parameters.
Syntax
To declare this entity in your Amazon CloudFormation template, use the following syntax:
JSON
{
  "MaxTokens" : Number,
  "StopSequences" : [ String, ... ],
  "Temperature" : Number,
  "TopP" : Number
}
YAML
MaxTokens: Number
StopSequences:
  - String
Temperature: Number
TopP: Number
Properties
MaxTokens
The maximum number of tokens to return in the response.
Required: No
Type: Number
Minimum: 0
Maximum: 4096
Update requires: No interruption
StopSequences
A list of strings that, when generated by the model, cause it to stop generating further output.
Required: No
Type: Array of String
Minimum: 0
Maximum: 4
Update requires: No interruption
Temperature
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
Required: No
Type: Number
Minimum: 0
Maximum: 1
Update requires: No interruption
TopP
The cumulative probability cutoff for the most-likely candidate tokens that the model considers when choosing the next token.
Required: No
Type: Number
Minimum: 0
Maximum: 1
Update requires: No interruption