
Deploy preprocessing logic into an ML model in a single endpoint using an inference pipeline in Amazon SageMaker

Project Overview

This pattern explains how to deploy multiple pipeline model objects behind a single endpoint by using an inference pipeline in Amazon SageMaker. Each pipeline model object represents a different stage of a machine learning (ML) workflow, such as preprocessing, model inference, or postprocessing. To illustrate the deployment of serially connected pipeline model objects, this pattern shows you how to deploy a preprocessing Scikit-learn container together with a regression model based on the linear learner algorithm built into SageMaker, both hosted behind a single SageMaker endpoint.
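In an inference pipeline, SageMaker invokes the containers in order and passes each container's output as the next container's input; in the SageMaker Python SDK this is typically expressed with `sagemaker.pipeline.PipelineModel`. The self-contained sketch below illustrates that chaining behavior locally, without AWS resources; the stage functions, weights, and sample data are hypothetical stand-ins for the Scikit-learn preprocessing container and the linear learner model.

```python
# Local illustration of how an inference pipeline chains stages:
# each stage's output becomes the next stage's input, and the
# caller interacts with the chain as a single "endpoint".

def scale_features(record):
    """Preprocessing stage (stand-in for the Scikit-learn container):
    min-max scale the raw features to the [0, 1] range."""
    lo, hi = min(record), max(record)
    span = (hi - lo) or 1.0  # avoid division by zero for constant records
    return [(x - lo) / span for x in record]

def linear_predict(features):
    """Inference stage (stand-in for the linear learner model):
    a toy linear regression with fixed, illustrative weights."""
    weights = [0.5, 1.5, -0.25]
    bias = 0.1
    return bias + sum(w * f for w, f in zip(weights, features))

def inference_pipeline(stages, payload):
    """Invoke the stages serially, as the pipeline endpoint
    does with its containers, and return the final output."""
    for stage in stages:
        payload = stage(payload)
    return payload

# A single call runs preprocessing and inference together.
prediction = inference_pipeline([scale_features, linear_predict],
                                [10.0, 20.0, 30.0])
```

The key design point the sketch mirrors is that the caller sends raw, unpreprocessed data to one entry point; the serial composition of containers is an internal detail of the endpoint.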

https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-preprocessing-logic-into-an-ml-model-in-a-single-endpoint-using-an-inference-pipeline-in-amazon-sagemaker.html?did=pg_card&trk=pg_card

To learn more about this project, connect with us.
