Wednesday, 4 July 2018

Amazon SageMaker inference calls are now supported on AWS PrivateLink for more secure access from within your VPC

Amazon SageMaker inference calls are now supported on AWS PrivateLink, which keeps the calls off the public internet. Customers can now make inference calls to machine learning models hosted on Amazon SageMaker from within their Amazon Virtual Private Cloud (VPC) without any traffic traversing the internet. To get inferences from a model, you first deploy the model into production with Amazon SageMaker, and client applications then call the Amazon SageMaker Runtime API to request predictions. With this new feature, those SageMaker Runtime API calls can be made through an interface VPC endpoint inside your Virtual Private Cloud.
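As a rough illustration, the sketch below shows one way this could be set up with boto3: creating an interface VPC endpoint for the SageMaker Runtime service and then invoking a hosted model endpoint as usual. All IDs, the region, the endpoint name, and the payload are placeholder assumptions, not values from this announcement.

```python
import boto3

# Create an interface VPC endpoint for the SageMaker Runtime API so that
# inference traffic stays inside the VPC (all IDs below are placeholders).
ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                 # your VPC (placeholder)
    ServiceName="com.amazonaws.us-east-1.sagemaker.runtime",
    SubnetIds=["subnet-0123456789abcdef0"],        # subnets for the endpoint ENIs (placeholder)
    SecurityGroupIds=["sg-0123456789abcdef0"],     # must allow HTTPS (443) from your clients (placeholder)
    PrivateDnsEnabled=True,                        # resolve the default Runtime DNS name to the endpoint
)

# With private DNS enabled, the standard SageMaker Runtime client resolves to
# the interface endpoint automatically, so the calling code does not change.
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")
response = runtime.invoke_endpoint(
    EndpointName="my-model-endpoint",              # a model already deployed with SageMaker hosting (placeholder)
    ContentType="text/csv",
    Body=b"5.1,3.5,1.4,0.2",                       # example payload (placeholder)
)
print(response["Body"].read())
```

With private DNS enabled on the endpoint, existing client applications keep using the normal SageMaker Runtime hostname and simply stop routing over the internet; without it, you would point the client at the endpoint-specific DNS name instead.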
