Deep Learning AMI
Developer Guide
The AWS services or features described in AWS documentation might vary by Region. To see the differences that apply to the China Regions, see Getting Started with AWS Services in China.

GPU Training

This section covers training on GPU-based clusters.

For a complete list of AWS Deep Learning Containers, see Deep Learning Containers Images.

Note

MKL users: read the AWS Deep Learning Containers MKL recommendations to get the best training or inference performance.

MXNet

This tutorial guides you through training with MXNet on your single-node GPU cluster.

  1. Create a pod file for your cluster. A pod file provides the instructions about what the cluster should run. This pod file downloads the MXNet repository and runs an MNIST example. Open vi or vim, then copy and paste the following content. Save this file as mxnet.yaml.

    apiVersion: v1
    kind: Pod
    metadata:
      name: mxnet-training
    spec:
      restartPolicy: OnFailure
      containers:
      - name: mxnet-training
        image: 763104351884.dkr.ecr.us-east-1.amazonaws.com/mxnet-training:1.4.1-gpu-py36-cu100-ubuntu16.04
        command: ["/bin/sh","-c"]
        args: ["git clone -b v1.4.x https://github.com/apache/incubator-mxnet.git && python ./incubator-mxnet/example/image-classification/train_mnist.py"]
  2. Assign the pod file to the cluster using kubectl.

    $ kubectl create -f mxnet.yaml
  3. You should see the following output:

    pod/mxnet-training created
  4. Check the status. The name of the job "mxnet-training" came from the mxnet.yaml file. It now appears in the status. If you are running any other tests or have previously run something, those also appear in this list. Run this several times until you see the status change to "Running".

    $ kubectl get pods

    You should see the following output:

    NAME             READY   STATUS    RESTARTS   AGE
    mxnet-training   0/1     Running   8          19m
  5. Check the logs to see the training output.

    $ kubectl logs mxnet-training

    You should see something similar to the following output:

    Cloning into 'incubator-mxnet'...
    INFO:root:Epoch[0] Batch [0-100]   Speed: 18437.78 samples/sec   accuracy=0.777228
    INFO:root:Epoch[0] Batch [100-200] Speed: 16814.68 samples/sec   accuracy=0.907188
    INFO:root:Epoch[0] Batch [200-300] Speed: 18855.48 samples/sec   accuracy=0.926719
    INFO:root:Epoch[0] Batch [300-400] Speed: 20260.84 samples/sec   accuracy=0.938438
    INFO:root:Epoch[0] Batch [400-500] Speed: 9062.62 samples/sec    accuracy=0.938594
    INFO:root:Epoch[0] Batch [500-600] Speed: 10467.17 samples/sec   accuracy=0.945000
    INFO:root:Epoch[0] Batch [600-700] Speed: 11082.03 samples/sec   accuracy=0.954219
    INFO:root:Epoch[0] Batch [700-800] Speed: 11505.02 samples/sec   accuracy=0.956875
    INFO:root:Epoch[0] Batch [800-900] Speed: 9072.26 samples/sec    accuracy=0.955781
    INFO:root:Epoch[0] Train-accuracy=0.923424
    ...
  6. You can check the logs to watch the training progress. You can also continue checking "get pods" to refresh the status. When the status changes to "Completed", the training job is done.
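Instead of re-running "kubectl get pods" by hand, the polling in the steps above can be scripted. This is a minimal sketch, not part of the AWS tutorial: the helper names pod_status and wait_for_pod are ours, and it assumes the default column layout of "kubectl get pods" shown above.

```shell
# Extract the STATUS column for a named pod from "kubectl get pods" output.
# (pod_status/wait_for_pod are illustrative helper names, not kubectl features.)
pod_status() {
  pod_name="$1"; pods_output="$2"
  printf '%s\n' "$pods_output" | awk -v p="$pod_name" '$1 == p { print $3 }'
}

# Poll until the pod reaches a terminal-looking status. The 10-second
# interval is an arbitrary choice.
wait_for_pod() {
  pod_name="$1"
  while :; do
    status="$(pod_status "$pod_name" "$(kubectl get pods)")"
    echo "status: $status"
    case "$status" in
      Completed|Error|CrashLoopBackOff) break ;;
    esac
    sleep 10
  done
}

# Usage: wait_for_pod mxnet-training
```

The same helpers work for the TensorFlow and PyTorch pods below by passing their pod names instead.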

TensorFlow

This tutorial guides you through training a TensorFlow model on your single-node GPU cluster.

  1. Create a pod file for your cluster. A pod file provides the instructions about what the cluster should run. This pod file downloads Keras and runs a Keras example. The example uses the TensorFlow backend. Open vi or vim, then copy and paste the following content. Save this file as tf.yaml.

    apiVersion: v1
    kind: Pod
    metadata:
      name: tensorflow-training
    spec:
      restartPolicy: OnFailure
      containers:
      - name: tensorflow-training
        image: 763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-training:1.14.0-gpu-py36-cu100-ubuntu16.04
        command: ["/bin/sh","-c"]
        args: ["git clone https://github.com/fchollet/keras.git && python /keras/examples/mnist_cnn.py"]
        resources:
          limits:
            nvidia.com/gpu: 1
  2. Assign the pod file to the cluster using kubectl.

    $ kubectl create -f tf.yaml
  3. You should see the following output:

    pod/tensorflow-training created
  4. Check the status. The name of the job "tensorflow-training" came from the tf.yaml file. It now appears in the status. If you are running any other tests or have previously run something, those also appear in this list. Run this several times until you see the status change to "Running".

    $ kubectl get pods

    You should see the following output:

    NAME                  READY   STATUS    RESTARTS   AGE
    tensorflow-training   0/1     Running   8          19m
  5. Check the logs to see the training output.

    $ kubectl logs tensorflow-training

    You should see something similar to the following output:

    Cloning into 'keras'...
    Using TensorFlow backend.
    Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
        8192/11490434 [..............................] - ETA: 0s
     6479872/11490434 [===============>..............] - ETA: 0s
     8740864/11490434 [=====================>........] - ETA: 0s
    11493376/11490434 [==============================] - 0s 0us/step
    x_train shape: (60000, 28, 28, 1)
    60000 train samples
    10000 test samples
    Train on 60000 samples, validate on 10000 samples
    Epoch 1/12
    2019-03-19 01:52:33.863598: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX512F
    2019-03-19 01:52:33.867616: I tensorflow/core/common_runtime/process_util.cc:69] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
      128/60000 [..............................] - ETA: 10:43 - loss: 2.3076 - acc: 0.0625
      256/60000 [..............................] - ETA: 5:59 - loss: 2.2528 - acc: 0.1445
      384/60000 [..............................] - ETA: 4:24 - loss: 2.2183 - acc: 0.1875
      512/60000 [..............................] - ETA: 3:35 - loss: 2.1652 - acc: 0.1953
      640/60000 [..............................] - ETA: 3:05 - loss: 2.1078 - acc: 0.2422
    ...
  6. You can check the logs to watch the training progress. You can also continue checking "get pods" to refresh the status. When the status changes to "Completed", the training job is done.
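If you only want the current accuracy rather than the full log, the Keras progress lines shown above can be filtered. A small sketch (the latest_acc helper name is ours; it assumes the "acc: NNN" format in the sample output):

```shell
# Print the most recent "acc:" value from Keras progress output on stdin.
latest_acc() {
  # Keep the accuracy tokens, take the last one, print just the number.
  grep -o 'acc: [0-9.]*' | tail -n 1 | awk '{ print $2 }'
}

# Usage: kubectl logs tensorflow-training | latest_acc
```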

PyTorch

This tutorial guides you through training with PyTorch on your single-node GPU cluster.

  1. Create a pod file for your cluster. A pod file provides the instructions about what the cluster should run. This pod file downloads the PyTorch repository and runs an MNIST example. Open vi or vim, then copy and paste the following content. Save this file as pytorch.yaml.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pytorch-training
    spec:
      restartPolicy: OnFailure
      containers:
      - name: pytorch-training
        image: 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-training:1.2.0-gpu-py36-cu100-ubuntu16.04
        command:
          - "/bin/sh"
          - "-c"
        args:
          - "git clone https://github.com/pytorch/examples.git && python examples/mnist/main.py --no-cuda"
        env:
          - name: OMP_NUM_THREADS
            value: "36"
          - name: KMP_AFFINITY
            value: "granularity=fine,verbose,compact,1,0"
          - name: KMP_BLOCKTIME
            value: "1"
  2. Assign the pod file to the cluster using kubectl.

    $ kubectl create -f pytorch.yaml
  3. You should see the following output:

    pod/pytorch-training created
  4. Check the status. The name of the job "pytorch-training" came from the pytorch.yaml file. It now appears in the status. If you are running any other tests or have previously run something, those also appear in this list. Run this several times until you see the status change to "Running".

    $ kubectl get pods

    You should see the following output:

    NAME               READY   STATUS    RESTARTS   AGE
    pytorch-training   0/1     Running   8          19m
  5. Check the logs to see the training output.

    $ kubectl logs pytorch-training

    You should see something similar to the following output:

    Cloning into 'examples'...
    Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ../data/MNIST/raw/train-images-idx3-ubyte.gz
    9920512it [00:00, 40133996.38it/s]
    Extracting ../data/MNIST/raw/train-images-idx3-ubyte.gz to ../data/MNIST/raw
    Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to ../data/MNIST/raw/train-labels-idx1-ubyte.gz
    Extracting ../data/MNIST/raw/train-labels-idx1-ubyte.gz to ../data/MNIST/raw
    32768it [00:00, 831315.84it/s]
    Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to ../data/MNIST/raw/t10k-images-idx3-ubyte.gz
    1654784it [00:00, 13019129.43it/s]
    Extracting ../data/MNIST/raw/t10k-images-idx3-ubyte.gz to ../data/MNIST/raw
    Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to ../data/MNIST/raw/t10k-labels-idx1-ubyte.gz
    8192it [00:00, 337197.38it/s]
    Extracting ../data/MNIST/raw/t10k-labels-idx1-ubyte.gz to ../data/MNIST/raw
    Processing...
    Done!
    Train Epoch: 1 [0/60000 (0%)]   Loss: 2.300039
    Train Epoch: 1 [640/60000 (1%)] Loss: 2.213470
    Train Epoch: 1 [1280/60000 (2%)] Loss: 2.170460
    Train Epoch: 1 [1920/60000 (3%)] Loss: 2.076699
    Train Epoch: 1 [2560/60000 (4%)] Loss: 1.868078
    Train Epoch: 1 [3200/60000 (5%)] Loss: 1.414199
    Train Epoch: 1 [3840/60000 (6%)] Loss: 1.000870
  6. You can check the logs to watch the training progress. You can also continue checking "get pods" to refresh the status. When the status changes to "Completed", the training job is done.
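A quick way to confirm from the log that training is actually progressing is to compare the first and most recent loss values. A sketch (the loss_trend helper name is ours; it assumes the "Loss: NNN" format in the PyTorch output above):

```shell
# Summarize "Loss:" lines on stdin as "first=... last=..." so a falling
# loss is visible at a glance.
loss_trend() {
  awk '/Loss:/ {
         v = $NF                       # the loss value is the last field
         if (first == "") first = v
         last = v
       }
       END { if (first != "") print "first=" first " last=" last }'
}

# Usage: kubectl logs pytorch-training | loss_trend
```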

When you are done using the cluster, see EKS Cleanup for information on cleaning up the cluster.
