Use CNTK for Inference with an ONNX Model - Deep Learning AMI
Services or capabilities described in Amazon Web Services documentation might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China (PDF).

Use CNTK for Inference with an ONNX Model

Note

We no longer include the CNTK, Caffe, Caffe2 and Theano Conda environments in the Amazon Deep Learning AMI starting with the v28 release. Previous releases of the Amazon Deep Learning AMI that contain these environments will continue to be available. However, we will only provide updates to these environments if there are security fixes published by the open source community for these frameworks.

Note

The VGG-16 model used in this tutorial consumes a large amount of memory. When selecting your Amazon Deep Learning AMI instance, you may need an instance with more than 30 GB of RAM.

How to Use an ONNX Model for Inference with CNTK
    • (Option for Python 3) - Activate the Python 3 CNTK environment:

      $ source activate cntk_p36
    • (Option for Python 2) - Activate the Python 2 CNTK environment:

      $ source activate cntk_p27
  1. The remaining steps assume you are using the cntk_p36 environment.

  2. Create a new file with your text editor, and use the following program in a script to open an ONNX format file in CNTK.

    import cntk as C

    # Import the ONNX model into CNTK via the CNTK import API
    z = C.Function.load("vgg16.onnx", device=C.device.cpu(), format=C.ModelFormat.ONNX)
    print("Loaded vgg16.onnx!")

    After you run this script, CNTK will have loaded the model.

  3. You may also try running inference with CNTK. First, download a picture of a husky.

    $ curl -O https://upload.wikimedia.org/wikipedia/commons/b/b5/Siberian_Husky_bi-eyed_Flickr.jpg
  4. Next, download a list of classes that will work with this model.

    $ curl -O https://gist.githubusercontent.com/yrevar/6135f1bd8dcf2e0cc683/raw/d133d61a09d7e5a3b36b8c111a8dd5c4b5d560ee/imagenet1000_clsid_to_human.pkl
  5. Edit the previously created script to have the following content. This new version will preprocess the image of the husky, run inference, then look up the predicted class in the file of class labels and print the result.

    import cntk as C
    import numpy as np
    from PIL import Image
    from IPython.core.display import display
    import pickle

    # Import the model into CNTK via the CNTK import API
    z = C.Function.load("vgg16.onnx", device=C.device.cpu(), format=C.ModelFormat.ONNX)
    print("Loaded vgg16.onnx!")

    # Load the image and resize it to the network's expected input size
    img = Image.open("Siberian_Husky_bi-eyed_Flickr.jpg")
    img = img.resize((224, 224))

    # Center the pixel values, reorder RGB to BGR, and move channels first
    rgb_img = np.asarray(img, dtype=np.float32) - 128
    bgr_img = rgb_img[..., [2, 1, 0]]
    img_data = np.ascontiguousarray(np.rollaxis(bgr_img, 2))

    # Run inference and take the class with the highest score
    predictions = np.squeeze(z.eval({z.arguments[0]: [img_data]}))
    top_class = np.argmax(predictions)
    print(top_class)

    # Look up the human-readable label for the predicted class
    labels_dict = pickle.load(open("imagenet1000_clsid_to_human.pkl", "rb"))
    print(labels_dict[top_class])
  6. Then run the script, and you should see a result as follows:

    248 Eskimo dog, husky
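The image preprocessing in step 5 can be sketched in isolation, without CNTK or the downloaded photo. The block below is a minimal sketch that applies the same transform as the tutorial script (center pixel values around zero, swap RGB to BGR, move channels to the first axis) to a synthetic NumPy image; the `preprocess` helper name and the synthetic input are illustrative, not part of the tutorial.

```python
import numpy as np

def preprocess(rgb_img):
    """Mirror the tutorial's transform: center pixel values,
    reorder RGB to BGR, and move channels first (HWC -> CHW)."""
    rgb = rgb_img.astype(np.float32) - 128
    bgr = rgb[..., [2, 1, 0]]                       # swap the color channels
    return np.ascontiguousarray(np.rollaxis(bgr, 2))

# Synthetic 224x224 RGB image standing in for the resized husky photo
img = np.full((224, 224, 3), (200, 100, 50), dtype=np.uint8)
data = preprocess(img)
print(data.shape)     # (3, 224, 224): the channels-first layout the script feeds to z.eval
print(data[0, 0, 0])  # -78.0: the blue channel comes first after the BGR swap
```

The channels-first (CHW) layout is what the tutorial script passes to `z.eval`, so any replacement image must go through the same reordering.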
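The class-label file downloaded in step 4 is simply a pickled dictionary mapping integer class IDs to human-readable names. The sketch below, assuming a tiny two-entry stand-in for the real 1000-class file, round-trips such a dictionary through `pickle` the same way the step 5 script reads it; the filename and the second entry are hypothetical.

```python
import pickle

# Tiny stand-in for imagenet1000_clsid_to_human.pkl; per the output in step 6,
# class 248 in the real file is "Eskimo dog, husky". The 281 entry is illustrative.
labels_dict = {248: "Eskimo dog, husky", 281: "tabby, tabby cat"}

# Write and re-read the dictionary the same way the tutorial script loads the file
with open("labels_demo.pkl", "wb") as f:
    pickle.dump(labels_dict, f)

with open("labels_demo.pkl", "rb") as f:
    loaded = pickle.load(f)

top_class = 248                 # the index np.argmax returns in step 5
print(loaded[top_class])        # Eskimo dog, husky
```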