Wednesday 4 July 2018

Amazon SageMaker inference calls now supported on AWS PrivateLink, keeping traffic off the public Internet

Amazon SageMaker inference calls are now supported on AWS PrivateLink, which keeps the calls off the public Internet and makes them more secure. Customers can now make inference calls to machine learning models hosted on Amazon SageMaker from within their Amazon Virtual Private Cloud (VPC), with no need for traffic to traverse the internet. To get inferences from a model, the model must first be deployed to production with Amazon SageMaker; client applications then use the Amazon SageMaker Runtime API to make inference calls. With this new feature, you can make SageMaker Runtime API calls through an interface VPC endpoint inside your Virtual Private Cloud.
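As a minimal sketch of how this can fit together with boto3 (the region, VPC ID, subnet, security group, endpoint name, and payload below are illustrative placeholders, not values from the announcement): first create an interface VPC endpoint for the SageMaker Runtime service, then call the Runtime API from a client inside the VPC as usual.

    import boto3

    region = "us-east-1"  # hypothetical region

    # 1) Create an interface VPC endpoint for the SageMaker Runtime service so that
    #    InvokeEndpoint traffic stays inside the VPC instead of crossing the Internet.
    ec2 = boto3.client("ec2", region_name=region)
    vpc_endpoint = ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",               # hypothetical VPC ID
        ServiceName=f"com.amazonaws.{region}.sagemaker.runtime",
        SubnetIds=["subnet-0123456789abcdef0"],      # hypothetical subnet
        SecurityGroupIds=["sg-0123456789abcdef0"],   # hypothetical security group
        PrivateDnsEnabled=True,  # default runtime DNS name resolves to the endpoint
    )
    print(vpc_endpoint["VpcEndpoint"]["VpcEndpointId"])

    # 2) From a client inside the VPC, call the SageMaker Runtime API as usual;
    #    with private DNS enabled, the request is routed through the interface endpoint.
    runtime = boto3.client("sagemaker-runtime", region_name=region)
    response = runtime.invoke_endpoint(
        EndpointName="my-model-endpoint",  # hypothetical deployed SageMaker endpoint
        ContentType="text/csv",
        Body="5.1,3.5,1.4,0.2",            # example payload for an illustrative model
    )
    print(response["Body"].read().decode("utf-8"))

With private DNS enabled on the interface endpoint, existing client code that already calls the SageMaker Runtime API typically does not need to change; only the network path changes.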
