Amazon SageMaker inference calls are now supported over AWS PrivateLink, which keeps the traffic off the public internet and makes the calls more secure. Customers can now make inference calls to machine learning models hosted on Amazon SageMaker from within their Amazon Virtual Private Cloud (VPC), with no need to traverse the internet. To get inferences, a model must first be deployed to production with Amazon SageMaker; client applications then call the Amazon SageMaker Runtime API. With this new feature, those SageMaker Runtime API calls can be made through an interface endpoint inside the VPC.
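As a minimal sketch of the client side, the snippet below makes an inference call through the SageMaker Runtime API with boto3. The endpoint name and the JSON payload shape are illustrative assumptions, not details from the announcement; `invoke_endpoint` is the actual Runtime API operation. When this runs from a host in a VPC that has an interface endpoint for the SageMaker Runtime service, the request travels over PrivateLink rather than the public internet.

```python
import json

# Hypothetical endpoint name for illustration; replace with your own.
ENDPOINT_NAME = "my-model-endpoint"


def build_payload(features):
    """Serialize a feature vector as a JSON request body.

    The {"instances": [...]} shape is an assumption; use whatever
    content type and schema your deployed model container expects.
    """
    return json.dumps({"instances": [features]})


def invoke(features, region="us-east-1"):
    """Call the SageMaker Runtime API for a hosted model endpoint."""
    # Imported here so the module loads even without boto3 installed.
    import boto3

    client = boto3.client("sagemaker-runtime", region_name=region)
    response = client.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=build_payload(features),
    )
    return json.loads(response["Body"].read())


if __name__ == "__main__":
    print(build_payload([1.0, 2.0, 3.0]))
```

The interface endpoint itself is created in the VPC for the SageMaker Runtime service (service name of the form `com.amazonaws.<region>.sagemaker.runtime`); once it exists, no code change is needed — the same `invoke_endpoint` call resolves to the private endpoint.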