Deploy custom fine-tuned models from Amazon S3 and Amazon FSx using kubectl
The following steps show you how to deploy models stored on Amazon S3 or Amazon FSx to an Amazon SageMaker HyperPod cluster using kubectl.
The following instructions contain commands designed to run in a terminal. Ensure that you have configured your environment with AWS credentials before executing these commands.
Prerequisites
Before you begin, verify that you've:
- Set up inference capabilities on your Amazon SageMaker HyperPod clusters. For more information, see Setting up your HyperPod clusters for model deployment.
- Installed the kubectl utility and configured jq in your terminal. A quick sanity check follows this list.
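To confirm that both tools (and the AWS CLI itself) are available on your PATH before continuing, you can run:
kubectl version --client
jq --version
aws --version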
Setup and configuration
Replace all placeholder values with your actual resource identifiers.
- Select your Region in your environment.
export REGION=<region>
- Initialize your cluster name. This identifies the HyperPod cluster where your model will be deployed.
Note
Check with your cluster admin to ensure that permissions are granted for this role or user. You can run
aws sts get-caller-identity --query "Arn"
to check which role or user you are using in your terminal.
# Specify your HyperPod cluster name here
HYPERPOD_CLUSTER_NAME="<Hyperpod_cluster_name>"

# NOTE: For the sample deployment, we use ml.g5.8xlarge for the DeepSeek-R1 1.5B model, which has sufficient memory and GPU
instance_type="ml.g5.8xlarge"
- Initialize your cluster namespace. Your cluster admin should have already created a hyperpod-inference service account in your namespace. Later commands reference this value as $CLUSTER_NAMESPACE, so export it under that name.
export CLUSTER_NAMESPACE="<namespace>"
- Create a CRD using one of the following options (an Amazon S3 or an Amazon FSx model source); a sketch of an S3-based manifest follows this list.
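The tabbed Amazon S3 and Amazon FSx manifest examples are not reproduced here. The following shell heredoc writes a minimal sketch of an S3-backed InferenceEndpointConfig manifest; the apiVersion and field names are assumptions, so verify the exact schema against your installed operator with kubectl explain inferenceendpointconfig before applying. The endpoint name, bucket name, and model prefix are placeholders.
# Illustrative manifest only -- field names are assumptions; verify against your
# installed operator's schema (kubectl explain inferenceendpointconfig)
export SAGEMAKER_ENDPOINT_NAME="<endpoint_name>"

cat > inference-endpoint-config.yaml <<EOF
apiVersion: inference.sagemaker.aws.amazon.com/v1alpha1
kind: InferenceEndpointConfig
metadata:
  name: ${SAGEMAKER_ENDPOINT_NAME}
  namespace: ${CLUSTER_NAMESPACE}
spec:
  endpointName: ${SAGEMAKER_ENDPOINT_NAME}
  modelName: ${SAGEMAKER_ENDPOINT_NAME}
  instanceType: ${instance_type}
  modelSourceConfig:
    modelSourceType: s3            # use an FSx source type here for FSx-backed models
    modelLocation: <model_prefix>
    s3Storage:
      bucketName: <bucket_name>
      region: ${REGION}
EOF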
Deploy your model from Amazon S3 or Amazon FSx
- Get the Amazon EKS cluster name from the HyperPod cluster ARN for kubectl authentication.
export EKS_CLUSTER_NAME=$(aws --region $REGION sagemaker describe-cluster \
  --cluster-name $HYPERPOD_CLUSTER_NAME \
  --query 'Orchestrator.Eks.ClusterArn' --output text | \
  cut -d'/' -f2)

aws eks update-kubeconfig --name $EKS_CLUSTER_NAME --region $REGION
- Deploy your InferenceEndpointConfig model with one of the following options; an apply sketch follows this list.
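Assuming you wrote the manifest to inference-endpoint-config.yaml as sketched under Setup and configuration, a minimal apply looks like the following. kubectl must already point at the EKS cluster from the previous step.
# Confirm kubectl can reach the cluster
kubectl get nodes

# Create the InferenceEndpointConfig resource in your namespace
kubectl apply -f inference-endpoint-config.yaml

# Watch the model pods come up (Ctrl+C to stop watching)
kubectl get pods -n $CLUSTER_NAMESPACE -w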
Verify the status of your deployment
- Check if the model successfully deployed.
kubectl describe InferenceEndpointConfig $SAGEMAKER_ENDPOINT_NAME -n $CLUSTER_NAMESPACE
- Check that the endpoint is successfully created.
kubectl describe SageMakerEndpointRegistration $SAGEMAKER_ENDPOINT_NAME -n $CLUSTER_NAMESPACE
- Test the deployed endpoint to verify it's working correctly. This step confirms that your model is successfully deployed and can process inference requests.
aws sagemaker-runtime invoke-endpoint \
  --endpoint-name $SAGEMAKER_ENDPOINT_NAME \
  --content-type "application/json" \
  --body '{"inputs": "What is AWS SageMaker?"}' \
  --region $REGION \
  --cli-binary-format raw-in-base64-out \
  /dev/stdout
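Because jq is a prerequisite, you can also write the response body to a file and pretty-print it; the response.json file name below is just an illustration.
aws sagemaker-runtime invoke-endpoint \
  --endpoint-name $SAGEMAKER_ENDPOINT_NAME \
  --content-type "application/json" \
  --body '{"inputs": "What is AWS SageMaker?"}' \
  --region $REGION \
  --cli-binary-format raw-in-base64-out \
  response.json && jq . response.json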
Manage your deployment
When you're finished testing your deployment, use the following commands to clean up your resources.
Note
Verify that you no longer need the deployed model or stored data before proceeding.
Clean up your resources
- Delete the inference deployment and associated Kubernetes resources. This stops the running model containers and removes the SageMaker endpoint.
kubectl delete inferenceendpointconfig $SAGEMAKER_ENDPOINT_NAME -n $CLUSTER_NAMESPACE
- Verify that the cleanup completed successfully.
# Check that the Kubernetes resources are removed
kubectl get pods,svc,deployment,InferenceEndpointConfig,sagemakerendpointregistration -n $CLUSTER_NAMESPACE

# Verify that the SageMaker endpoint is deleted (should return an error or be empty)
aws sagemaker describe-endpoint --endpoint-name $SAGEMAKER_ENDPOINT_NAME --region $REGION
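Endpoint deletion is asynchronous, so the describe call above can keep returning the endpoint for a short while. If you prefer to block until deletion finishes, the AWS CLI ships a waiter for this:
# Poll until the SageMaker endpoint no longer exists
aws sagemaker wait endpoint-deleted --endpoint-name $SAGEMAKER_ENDPOINT_NAME --region $REGION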
Troubleshooting
Use these debugging commands if your deployment isn't working as expected.
- Check the Kubernetes deployment status.
kubectl describe deployment $SAGEMAKER_ENDPOINT_NAME -n $CLUSTER_NAMESPACE
- Check the InferenceEndpointConfig status to see the high-level deployment state and any configuration issues.
kubectl describe InferenceEndpointConfig $SAGEMAKER_ENDPOINT_NAME -n $CLUSTER_NAMESPACE
- Check the status of all Kubernetes objects to get a comprehensive view of the related resources in your namespace. This gives you a quick overview of what's running and what might be missing.
kubectl get pods,svc,deployment,InferenceEndpointConfig,sagemakerendpointregistration -n $CLUSTER_NAMESPACE
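If the resources exist but the endpoint still isn't responding, the container logs and recent namespace events usually point to the cause. A minimal sketch, assuming the deployment shares the endpoint name as in the commands above:
# Tail the logs of the model-serving pods behind the deployment
kubectl logs deployment/$SAGEMAKER_ENDPOINT_NAME -n $CLUSTER_NAMESPACE --tail=100

# List recent events in the namespace, newest last
kubectl get events -n $CLUSTER_NAMESPACE --sort-by=.lastTimestamp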