If you have followed the instructions in Deploy a Model, you should have a SageMaker AI endpoint set up and running. Regardless of how you deployed your Neo-compiled model, there are three ways you can submit inference requests:
Request Inferences from a Deployed Service (Amazon SageMaker SDK)
Request Inferences from a Deployed Service (Boto3)
Request Inferences from a Deployed Service (Amazon CLI)
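As a quick illustration of the second option, the sketch below sends a JSON payload to a running endpoint with Boto3. The endpoint name, region, and payload shape are placeholders; substitute the values from your own deployment, and note that the input format your Neo-compiled model expects depends on the framework it was compiled from.

```python
import json

# Example payload; the shape your model expects may differ.
payload = json.dumps({"instances": [[1.0, 2.0, 3.0]]})

def predict(endpoint_name, body, region_name="us-west-2"):
    """Invoke a SageMaker AI endpoint and return the parsed JSON response.

    endpoint_name and region_name are placeholders for your deployment.
    """
    # Imported here so the helper can be defined without the AWS SDK installed.
    import boto3

    runtime = boto3.client("sagemaker-runtime", region_name=region_name)
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=body,
    )
    return json.loads(response["Body"].read())
```

A call such as `predict("my-neo-endpoint", payload)` would then return the model's prediction, assuming your AWS credentials and region are configured.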