Seldon Serving
Serve a model using Seldon
Seldon comes installed with Kubeflow. Full documentation for running Seldon inference is available on the Seldon documentation site.
If you have a saved model in a PersistentVolume (PV), a Google Cloud Storage bucket, or an Amazon S3 bucket, you can use one of the prepackaged model servers provided by Seldon.
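For example, a SeldonDeployment along the lines of the following sketch serves a scikit-learn model from Google Cloud Storage using the prepackaged SKLEARN_SERVER; the deployment name and bucket path are hypothetical:

```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: sklearn-iris        # hypothetical deployment name
  namespace: kubeflow
spec:
  name: iris
  predictors:
  - name: default
    replicas: 1
    graph:
      name: classifier
      implementation: SKLEARN_SERVER   # prepackaged scikit-learn model server
      modelUri: gs://my-bucket/sklearn/iris   # hypothetical model location
```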
Seldon also provides language-specific model wrappers that package your inference code so that it can run in Seldon.
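Once your wrapped inference code is built into a container image, you reference that image from a SeldonDeployment instead of a prepackaged server. A minimal sketch, assuming a hypothetical image name:

```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: my-model            # hypothetical deployment name
spec:
  predictors:
  - name: default
    componentSpecs:
    - spec:
        containers:
        - name: classifier
          image: myrepo/mymodel:0.1   # image built with a Seldon language wrapper (hypothetical)
    graph:
      name: classifier      # must match the container name above
      type: MODEL
```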
Kubeflow Specifics
- By default Seldon is configured to use the Istio gateway `kubeflow-gateway` and will add VirtualServices for the Seldon resources you create, exposing Seldon paths through the Kubeflow Istio gateway (see the sketch after this list).
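The VirtualService that Seldon adds looks roughly like the sketch below; the deployment name `sklearn-iris` and namespace `kubeflow` are hypothetical, and the route prefix follows Seldon's `/seldon/<namespace>/<deployment>/` convention:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sklearn-iris        # hypothetical; created and managed by Seldon
  namespace: kubeflow
spec:
  gateways:
  - kubeflow-gateway        # the Kubeflow Istio gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /seldon/kubeflow/sklearn-iris/   # /seldon/<namespace>/<deployment>/
    rewrite:
      uri: /
    route:
    - destination:
        host: sklearn-iris-default   # Seldon-managed service (name illustrative)
        port:
          number: 8000
```

With a VirtualService like this in place, REST prediction requests can be sent through the Kubeflow gateway at a path such as `/seldon/<namespace>/<deployment>/api/v0.1/predictions`.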
Examples
Seldon provides a large set of example notebooks showing how to run inference code for a wide range of machine learning toolkits.