This documentation is a machine-translated version. If there is any discrepancy between this translation and the English original, the English version prevails.
Amazon Personalize endpoints and quotas
The following sections contain information about Amazon Personalize guidelines, quotas, and endpoints. For adjustable quotas, you can request a quota increase through the Service Quotas console.
Amazon Personalize endpoints and Regions
For a list of Amazon Personalize endpoints by Region, see Amazon Regions and Endpoints in the Amazon Web Services General Reference.
Compliance
For information about Amazon Personalize compliance, see Amazon Compliance.
Service Quotas
Your Amazon account has the following Amazon Personalize quotas.
Resource | Quota |
---|---|
Interactions | |
Minimum number of unique combined historical and event interactions (after filtering by eventType and eventValueThreshold, if provided) required to train a model (create a solution version). | 1000 |
Maximum number of interactions that are considered by a model during training. | 500 million (adjustable) |
Maximum number of distinct event types combined with total number of optional metadata columns in Interactions datasets. | 10 |
Maximum number of metadata columns, excluding reserved fields, in Interactions datasets. | 5 |
Maximum number of characters for categorical data and impression values. | 1000 |
Maximum amount of bulk interactions data per dataset import job with FULL import mode. | 100 GB (increases to 1 TB with any increase to the maximum number of interactions considered by a model during training) |
Maximum amount of bulk interactions data per dataset import job with INCREMENTAL import mode. | 1 GB |
Minimum number of interaction records per dataset import job with FULL or INCREMENTAL import mode. | 1000 |
Users | |
Minimum number of unique users, with at least 2 interactions each, required to train a model (create a solution version). | 25 |
Maximum number of metadata fields for a Users dataset. | 5 |
Maximum number of characters for USER_ID data values. | 256 |
Maximum number of characters for categorical data values. | 1000 characters |
Maximum amount of bulk user data per dataset import job with FULL import mode. | 100 GB |
Maximum amount of bulk user data per dataset import job with INCREMENTAL import mode. | 1 GB |
Items | |
Maximum number of items that are considered by a model during training and generating recommendations. | 750,000 |
Maximum number of metadata fields for an Items dataset. | 50 |
Maximum number of characters for ITEM_ID data values. | 256 |
Maximum number of characters for categorical data values. | 1000 characters |
Maximum number of characters for textual data values for Chinese and Japanese languages. | 7,000 characters |
Maximum number of characters for textual data values for all other languages. | 20,000 characters |
Maximum amount of bulk items data per dataset import job with FULL import mode. | 100 GB |
Maximum amount of bulk item data per dataset import job with INCREMENTAL import mode. | 1 GB |
Individual record import APIs | |
Maximum rate of PutEvents requests per dataset group. | 1000/second |
Maximum number of events in a PutEvents call. | 10 |
Maximum size of an event. | 10 KB |
Maximum rate of PutItems requests per dataset group. | 10/second |
Maximum number of items in a PutItems call. | 10 |
Maximum rate of PutUsers requests per dataset group. | 10/second |
Maximum number of users in a PutUsers call. | 10 |
Legacy recipes | |
Maximum amount of combined data for Users and Items datasets for HRNN-metadata and HRNN-Coldstart recipes. | 5 GB |
Maximum number of cold start items the HRNN-Coldstart recipe supports to train a model (create a solution version). | 80,000 |
Minimum number of cold start items the HRNN-Coldstart recipe requires to train a model (create a solution version). | 100 |
Filters | |
Total number of filters per dataset group. | 10 |
Maximum number of distinct dataset fields for a filter. | 5 |
Total number of distinct dataset fields across all filters in a dataset group. | 10 |
Maximum number of interactions per user per event type considered by a filter. | 100 interactions (adjustable) |
GetRecommendations / GetPersonalizedRanking requests | |
Maximum transaction rate (GetRecommendations and GetPersonalizedRanking requests). | 2500/sec |
Maximum number of GetRecommendations requests per second per campaign. | 500/sec |
Maximum number of GetPersonalizedRanking requests per second per campaign. | 500/sec |
Metric attribution quotas | |
Maximum number of metrics for a metric attribution. | 10 |
Maximum number of unique event attribution sources. | 100 |
Batch inference jobs | |
Maximum number of input files in a batch inference job. | 1000 |
Maximum size of batch inference job input. | 1 GB |
Maximum number of records per input file in a batch inference job. | 50 million |
Batch segment jobs | |
Maximum number of queries per input file for Item-Affinity recipe. | 500 |
Maximum number of queries per input file for Item-Attribute-Affinity recipe. | 10 |
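The individual record import quotas above cap a PutEvents call at 10 events, each at most 10 KB. As a hedged sketch (the helper name `chunk_events` is hypothetical, not part of any Amazon Personalize SDK), a client can validate and batch buffered events before sending each batch in its own PutEvents call:

```python
import json

MAX_EVENTS_PER_PUT = 10           # quota: maximum number of events per PutEvents call
MAX_EVENT_SIZE_BYTES = 10 * 1024  # quota: maximum size of a single event (10 KB)

def chunk_events(events, batch_size=MAX_EVENTS_PER_PUT):
    """Split a list of event dicts into batches that each fit one PutEvents call,
    rejecting any single event that exceeds the 10 KB size quota."""
    for event in events:
        size = len(json.dumps(event).encode("utf-8"))
        if size > MAX_EVENT_SIZE_BYTES:
            raise ValueError(f"event exceeds 10 KB quota ({size} bytes)")
    return [events[i:i + batch_size] for i in range(0, len(events), batch_size)]

# Example: 23 buffered events become 3 PutEvents payloads (10 + 10 + 3).
events = [{"eventType": "click", "itemId": str(i)} for i in range(23)]
batches = chunk_events(events)
# Each batch would then be sent with the events SDK client, e.g.:
# personalize_events.put_events(trackingId=..., userId=..., sessionId=..., eventList=batch)
```

Staying within the per-call quota client-side avoids validation errors from oversized requests; the 1000/second request-rate quota still applies to the resulting calls.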
Your Amazon account has the following quotas per Region.
Resource | Quota |
---|---|
Total number of active schemas. | 500 |
Total number of active dataset groups. | 5 (adjustable) |
Total number of pending or in-progress dataset import jobs. | 5 |
Total number of pending or in-progress batch inference jobs. | 5 (adjustable) |
Total number of pending or in-progress batch segment jobs. | 5 |
Total number of pending or in-progress solution versions. | 20 (adjustable) |
Each dataset group has the following quotas.
Resource | Quota |
---|---|
Total number of active solutions. | 10 (adjustable) |
Total number of active campaigns. | 5 (adjustable) |
Total number of recommenders. | 5 |
Total number of filters. | 10 (adjustable) |
Total number of distinct dataset fields across all filters. | 10 |
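The request quotas above include 500 GetRecommendations requests per second per campaign. A simple client-side token bucket can keep a high-volume caller under that limit; this is a generic sketch, not part of any AWS SDK:

```python
import time

class TokenBucket:
    """Client-side limiter: allow at most `rate` requests per second,
    e.g. the 500/sec per-campaign GetRecommendations quota."""

    def __init__(self, rate, capacity=None):
        self.rate = rate
        self.capacity = capacity if capacity is not None else rate
        self.tokens = float(self.capacity)
        self.last = time.monotonic()

    def try_acquire(self):
        # Refill tokens based on elapsed time, capped at bucket capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 600 back-to-back attempts against a 500/sec bucket: roughly 500 succeed,
# the rest are deferred by the caller (retried later or dropped).
bucket = TokenBucket(rate=500)
allowed = sum(bucket.try_acquire() for _ in range(600))
```

A denied `try_acquire` would typically trigger a short sleep and retry rather than sending a request that the service would throttle.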
Requesting a quota increase
For adjustable quotas, you can request a quota increase using the Service Quotas console. The following quotas are adjustable:

- Maximum number of interactions that are considered by a model during training
- Number of active campaigns per dataset group
- Number of active dataset groups
- Number of active filters per dataset group
- Number of active solutions per dataset group
- Amount of data per incremental import
- Maximum number of interactions per user per event type considered by a filter
- Total number of pending or in-progress batch inference jobs
- Total number of pending or in-progress solution versions
- Maximum PutEvents request rate

To request a quota increase, use the Service Quotas console.
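The console is the documented path, but as a sketch, the same increase can be requested programmatically through the Service Quotas API (boto3's `request_service_quota_increase`). The quota code below is a placeholder, not a real value; the actual codes can be listed with `list_service_quotas` for the `personalize` service code:

```python
# Hedged sketch: building a Service Quotas increase request programmatically.
params = {
    "ServiceCode": "personalize",
    "QuotaCode": "L-XXXXXXXX",    # placeholder: look up the real quota code first
    "DesiredValue": 1000000000.0,  # e.g. raising interactions considered during training
}

# With AWS credentials configured, the request would be sent with boto3:
# import boto3
# sq = boto3.client("service-quotas")
# codes = sq.list_service_quotas(ServiceCode="personalize")  # find the real QuotaCode
# sq.request_service_quota_increase(**params)
```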