Troubleshoot Neo Inference Errors - Amazon SageMaker


Troubleshoot Neo Inference Errors

This section contains information about how to prevent and resolve some of the common errors you might encounter when deploying and/or invoking the endpoint. This section applies to PyTorch 1.4.0 or later and MXNet v1.7.0 or later.

  • Make sure that the first inference (warm-up inference) on valid input data is done in model_fn(). Otherwise, the following error message might be seen on the terminal when the predict API is called:

    An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (0) from <users-sagemaker-endpoint> with message "Your invocation timed out while waiting for a response from container model. Review the latency metrics for each container in Amazon CloudWatch, resolve the issue, and try again."
  • Make sure that you have set the environment variables listed in the following table. If they are not set, the following error message might appear:

    On the terminal:

    An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (503) from <users-sagemaker-endpoint> with message "{ "code": 503, "type": "InternalServerException", "message": "Prediction failed" } ".

    In CloudWatch:

    W-9001-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - AttributeError: 'NoneType' object has no attribute 'transform'
    Key                             Value
    SAGEMAKER_PROGRAM               inference.py
    SAGEMAKER_SUBMIT_DIRECTORY      /opt/ml/model/code
    SAGEMAKER_CONTAINER_LOG_LEVEL   20
    SAGEMAKER_REGION                <your region>
  • Make sure that the MMS_DEFAULT_RESPONSE_TIMEOUT environment variable is set to 500 or higher when creating the Amazon SageMaker model. Otherwise, the following error message might appear on the terminal:

    An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (0) from <users-sagemaker-endpoint> with message "Your invocation timed out while waiting for a response from container model. Review the latency metrics for each container in Amazon CloudWatch, resolve the issue, and try again."
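The warm-up step in the first bullet can be sketched as follows. This is a minimal, self-contained illustration of the pattern only: `_StubModel` and `_load_compiled_model` are hypothetical stand-ins for your framework's actual model class and loader, and the dummy input must be shaped like your real inference payload.

```python
import os

class _StubModel:
    """Hypothetical stand-in for a Neo-compiled model; replace with the
    object returned by your framework's loader."""
    def __init__(self):
        self.warmed_up = False

    def __call__(self, data):
        # The first call typically triggers lazy initialization/compilation.
        self.warmed_up = True
        return [0.0 for _ in data]

def _load_compiled_model(path):
    # Placeholder loader (hypothetical); use the framework-specific call
    # appropriate to your PyTorch/MXNet version here.
    return _StubModel()

def model_fn(model_dir):
    model = _load_compiled_model(os.path.join(model_dir, "compiled.model"))
    # Warm-up inference on valid dummy input inside model_fn, so the first
    # real InvokeEndpoint call does not hit the container timeout.
    dummy_input = [0.0, 0.0, 0.0]
    model(dummy_input)
    return model

model = model_fn("/opt/ml/model")
print(model.warmed_up)  # → True
```

The point of the pattern is that the expensive first forward pass happens during model loading, not during the customer-visible InvokeEndpoint call.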
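The environment variables from the table above, together with MMS_DEFAULT_RESPONSE_TIMEOUT, are passed in the container environment when the model is created. A sketch, assuming a boto3 create_model call (the region value, model name, image URI, model data URL, and role ARN are placeholders you must supply):

```python
# Environment variables required by the Neo inference container.
environment = {
    "SAGEMAKER_PROGRAM": "inference.py",
    "SAGEMAKER_SUBMIT_DIRECTORY": "/opt/ml/model/code",
    "SAGEMAKER_CONTAINER_LOG_LEVEL": "20",
    "SAGEMAKER_REGION": "us-west-2",        # replace with your region
    "MMS_DEFAULT_RESPONSE_TIMEOUT": "500",  # 500 or higher to avoid timeouts
}

# With boto3 (shown commented out for illustration; requires valid values):
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_model(
#     ModelName="my-neo-model",
#     PrimaryContainer={
#         "Image": image_uri,
#         "ModelDataUrl": model_data_url,
#         "Environment": environment,
#     },
#     ExecutionRoleArn=role_arn,
# )
```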