@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class GuardrailContentFilter extends Object implements Serializable, Cloneable, StructuredPojo
Contains filter strengths for harmful content. Guardrails support the following content filters to detect and filter harmful user inputs and FM-generated outputs.
- **Hate** – Describes language or a statement that discriminates, criticizes, insults, denounces, or dehumanizes a person or group on the basis of an identity (such as race, ethnicity, gender, religion, sexual orientation, ability, or national origin).
- **Insults** – Describes language or a statement that includes demeaning, humiliating, mocking, insulting, or belittling language. This type of language is also labeled as bullying.
- **Sexual** – Describes language or a statement that indicates sexual interest, activity, or arousal using direct or indirect references to body parts, physical traits, or sex.
- **Violence** – Describes language or a statement that includes glorification of or threats to inflict physical pain, hurt, or injury toward a person, group, or thing.
Content filtering depends on the confidence classification of user inputs and FM responses across each of the four harmful categories. All input and output statements are classified into one of four confidence levels (NONE, LOW, MEDIUM, HIGH) for each harmful category. For example, if a statement is classified as Hate with HIGH confidence, the likelihood of the statement representing hateful content is high. A single statement can be classified across multiple categories with varying confidence levels. For example, a single statement can be classified as Hate with HIGH confidence, Insults with LOW confidence, Sexual with NONE confidence, and Violence with MEDIUM confidence.
For more information, see Guardrails content filters.
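The fields described on this page (a harmful category plus an input and an output strength) combine into one object per filter. Below is a minimal, self-contained sketch of that shape and of the fluent `with*` pattern the class exposes; it is an illustrative stand-in, not the SDK class itself (the real type lives in the AWS SDK for Java and carries additional behavior such as marshalling).

```java
// Illustrative stand-in for the documented GuardrailContentFilter shape.
// The field names (type, inputStrength, outputStrength) and the fluent
// with* pattern come from this page; this is NOT the real SDK class.
public class GuardrailContentFilterSketch {
    private String type;           // harmful category, e.g. "HATE"
    private String inputStrength;  // filter strength applied to prompts
    private String outputStrength; // filter strength applied to model responses

    public String getType() { return type; }
    public String getInputStrength() { return inputStrength; }
    public String getOutputStrength() { return outputStrength; }

    // Fluent setters return 'this' so calls can be chained.
    public GuardrailContentFilterSketch withType(String type) {
        this.type = type;
        return this;
    }

    public GuardrailContentFilterSketch withInputStrength(String inputStrength) {
        this.inputStrength = inputStrength;
        return this;
    }

    public GuardrailContentFilterSketch withOutputStrength(String outputStrength) {
        this.outputStrength = outputStrength;
        return this;
    }

    public static void main(String[] args) {
        // A Hate filter that screens prompts aggressively and responses moderately.
        GuardrailContentFilterSketch filter = new GuardrailContentFilterSketch()
                .withType("HATE")
                .withInputStrength("HIGH")
                .withOutputStrength("MEDIUM");
        System.out.println(filter.getType() + " " + filter.getInputStrength()
                + " " + filter.getOutputStrength()); // prints "HATE HIGH MEDIUM"
    }
}
```

With the real class, the same chained calls would typically be used when building or inspecting a guardrail's content policy.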
This data type is used in the following API operations:
| Constructor and Description |
|---|
| `GuardrailContentFilter()` |
| Modifier and Type | Method and Description |
|---|---|
| `GuardrailContentFilter` | `clone()` |
| `boolean` | `equals(Object obj)` |
| `String` | `getInputStrength()` The strength of the content filter to apply to prompts. |
| `String` | `getOutputStrength()` The strength of the content filter to apply to model responses. |
| `String` | `getType()` The harmful category that the content filter is applied to. |
| `int` | `hashCode()` |
| `void` | `marshall(ProtocolMarshaller protocolMarshaller)` Marshalls this structured data using the given `ProtocolMarshaller`. |
| `void` | `setInputStrength(String inputStrength)` The strength of the content filter to apply to prompts. |
| `void` | `setOutputStrength(String outputStrength)` The strength of the content filter to apply to model responses. |
| `void` | `setType(String type)` The harmful category that the content filter is applied to. |
| `String` | `toString()` Returns a string representation of this object. |
| `GuardrailContentFilter` | `withInputStrength(GuardrailFilterStrength inputStrength)` The strength of the content filter to apply to prompts. |
| `GuardrailContentFilter` | `withInputStrength(String inputStrength)` The strength of the content filter to apply to prompts. |
| `GuardrailContentFilter` | `withOutputStrength(GuardrailFilterStrength outputStrength)` The strength of the content filter to apply to model responses. |
| `GuardrailContentFilter` | `withOutputStrength(String outputStrength)` The strength of the content filter to apply to model responses. |
| `GuardrailContentFilter` | `withType(GuardrailContentFilterType type)` The harmful category that the content filter is applied to. |
| `GuardrailContentFilter` | `withType(String type)` The harmful category that the content filter is applied to. |
`public void setType(String type)`

The harmful category that the content filter is applied to.

**Parameters:** `type` - The harmful category that the content filter is applied to.
**See Also:** `GuardrailContentFilterType`

`public String getType()`

The harmful category that the content filter is applied to.

**Returns:** The harmful category that the content filter is applied to.
**See Also:** `GuardrailContentFilterType`

`public GuardrailContentFilter withType(String type)`

The harmful category that the content filter is applied to.

**Parameters:** `type` - The harmful category that the content filter is applied to.
**Returns:** Returns a reference to this object so that method calls can be chained together.
**See Also:** `GuardrailContentFilterType`

`public GuardrailContentFilter withType(GuardrailContentFilterType type)`

The harmful category that the content filter is applied to.

**Parameters:** `type` - The harmful category that the content filter is applied to.
**Returns:** Returns a reference to this object so that method calls can be chained together.
**See Also:** `GuardrailContentFilterType`
`public void setInputStrength(String inputStrength)`

The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

**Parameters:** `inputStrength` - The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.
**See Also:** `GuardrailFilterStrength`

`public String getInputStrength()`

The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

**Returns:** The strength of the content filter to apply to prompts.
**See Also:** `GuardrailFilterStrength`

`public GuardrailContentFilter withInputStrength(String inputStrength)`

The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

**Parameters:** `inputStrength` - The strength of the content filter to apply to prompts.
**Returns:** Returns a reference to this object so that method calls can be chained together.
**See Also:** `GuardrailFilterStrength`

`public GuardrailContentFilter withInputStrength(GuardrailFilterStrength inputStrength)`

The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

**Parameters:** `inputStrength` - The strength of the content filter to apply to prompts.
**Returns:** Returns a reference to this object so that method calls can be chained together.
**See Also:** `GuardrailFilterStrength`
`public void setOutputStrength(String outputStrength)`

The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

**Parameters:** `outputStrength` - The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.
**See Also:** `GuardrailFilterStrength`

`public String getOutputStrength()`

The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

**Returns:** The strength of the content filter to apply to model responses.
**See Also:** `GuardrailFilterStrength`

`public GuardrailContentFilter withOutputStrength(String outputStrength)`

The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

**Parameters:** `outputStrength` - The strength of the content filter to apply to model responses.
**Returns:** Returns a reference to this object so that method calls can be chained together.
**See Also:** `GuardrailFilterStrength`

`public GuardrailContentFilter withOutputStrength(GuardrailFilterStrength outputStrength)`

The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

**Parameters:** `outputStrength` - The strength of the content filter to apply to model responses.
**Returns:** Returns a reference to this object so that method calls can be chained together.
**See Also:** `GuardrailFilterStrength`
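Each strength setter comes as a `String`/enum overload pair. A common way such pairs are implemented is for the enum overload to delegate to the `String` overload via the enum's string form; the sketch below illustrates that delegation pattern under that assumption (the enum constants mirror the NONE/LOW/MEDIUM/HIGH levels named on this page, and the class is a stand-in, not the SDK type).

```java
// Sketch of a String/enum overload pair like withInputStrength.
// The delegation (enum -> toString() -> String overload) is an assumption
// shown for illustration; the nested enum mirrors the levels on this page.
public class StrengthOverloadSketch {
    enum GuardrailFilterStrength { NONE, LOW, MEDIUM, HIGH }

    private String inputStrength;

    public String getInputStrength() { return inputStrength; }

    // String overload: stores the raw value and returns 'this' for chaining.
    public StrengthOverloadSketch withInputStrength(String inputStrength) {
        this.inputStrength = inputStrength;
        return this;
    }

    // Enum overload: delegates to the String overload via the enum's name.
    public StrengthOverloadSketch withInputStrength(GuardrailFilterStrength inputStrength) {
        return withInputStrength(inputStrength.toString());
    }

    public static void main(String[] args) {
        StrengthOverloadSketch a = new StrengthOverloadSketch()
                .withInputStrength(GuardrailFilterStrength.HIGH);
        StrengthOverloadSketch b = new StrengthOverloadSketch()
                .withInputStrength("HIGH");
        // Both overloads end up storing the same string value.
        System.out.println(a.getInputStrength().equals(b.getInputStrength())); // prints "true"
    }
}
```

Either overload is interchangeable for a valid level; the enum form avoids typos, while the `String` form accepts values the enum may not yet list.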
`public String toString()`

Returns a string representation of this object.

**Overrides:** `toString` in class `Object`
**See Also:** `Object.toString()`
`public GuardrailContentFilter clone()`
`public void marshall(ProtocolMarshaller protocolMarshaller)`

Marshalls this structured data using the given `ProtocolMarshaller`.

**Specified by:** `marshall` in interface `StructuredPojo`
**Parameters:** `protocolMarshaller` - Implementation of `ProtocolMarshaller` used to marshall this object's data.