
Custom models in Neptune ML

Neptune ML lets you define your own custom model implementations in Python. You can train and deploy custom models using the same Neptune ML infrastructure that the built-in models use, and then use them to obtain predictions through graph queries.


Note: Real-time inductive inference is not currently supported for custom models.

You can start implementing your own custom model in Python by following the Neptune ML toolkit examples and by using the model components that the toolkit provides. The following sections provide more details.
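As a rough orientation, a custom model implementation centers on a Python training script that Neptune ML runs as a SageMaker training job, which passes hyperparameters as command-line arguments and expects model artifacts to be written to an output directory. The sketch below illustrates that general shape only; the argument names, environment variables, and file layout are illustrative assumptions, not the Neptune ML toolkit's actual API.

```python
# Hypothetical sketch of a custom-model training entry point.
# All names here are illustrative assumptions about the training-job
# contract, not the Neptune ML toolkit's actual interface.
import argparse
import json
import os


def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    # Hyperparameters like these would typically be declared in the
    # model's hyperparameter configuration and passed in as CLI args.
    parser.add_argument("--n-epochs", type=int, default=10)
    parser.add_argument("--lr", type=float, default=0.01)
    # SageMaker-style input/output locations (assumed defaults).
    parser.add_argument(
        "--data-path", default=os.environ.get("SM_CHANNEL_TRAIN", "./data")
    )
    parser.add_argument(
        "--model-dir", default=os.environ.get("SM_MODEL_DIR", "./model")
    )
    return parser.parse_args(argv)


def train(args):
    # Placeholder for the real training loop: load the processed graph
    # data from args.data_path, train the model, and save artifacts to
    # args.model_dir so they can be packaged for deployment.
    os.makedirs(args.model_dir, exist_ok=True)
    with open(os.path.join(args.model_dir, "hyperparams.json"), "w") as f:
        json.dump(vars(args), f)


if __name__ == "__main__":
    train(parse_args())
```

The key design point is that the script is self-contained: everything it needs arrives through command-line arguments and environment variables, so the same code runs unchanged inside the managed training infrastructure and locally during development.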