Wednesday, 4 July 2018

Amazon SageMaker Inference Calls Now Supported on AWS PrivateLink for Greater Security

Amazon SageMaker inference calls are now supported on AWS PrivateLink, which makes them more secure by keeping traffic off the public Internet. Customers can now make inference calls to machine learning models hosted on Amazon SageMaker from within their Amazon Virtual Private Cloud (VPC) without that traffic traversing the Internet. To get inferences from a model, the model is first deployed into production with Amazon SageMaker, and client applications then use the Amazon SageMaker Runtime API to make inference calls. With this new feature, those SageMaker Runtime API calls can be made through an interface endpoint inside the VPC.
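Below is a minimal sketch of how this could look with boto3: first creating an interface VPC endpoint for the SageMaker Runtime service, then invoking a deployed model from inside the VPC. The VPC, subnet, security group IDs, endpoint name, and payload are all hypothetical placeholders, not values from the announcement.

```python
# Minimal sketch, assuming hypothetical resource IDs (VPC, subnet,
# security group) and an already-deployed SageMaker endpoint named
# "my-model-endpoint"; all names here are illustrative.
import boto3

region = "us-east-1"

# 1. Create an interface VPC endpoint for the SageMaker Runtime service,
#    so InvokeEndpoint traffic stays inside the VPC via AWS PrivateLink.
ec2 = boto3.client("ec2", region_name=region)
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",               # hypothetical VPC ID
    ServiceName=f"com.amazonaws.{region}.sagemaker.runtime",
    SubnetIds=["subnet-0123456789abcdef0"],      # hypothetical subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],   # hypothetical security group
    PrivateDnsEnabled=True,                      # default DNS resolves to the private endpoint
)

# 2. From a client running inside the VPC, call the model as usual; with
#    private DNS enabled, the request is routed over the interface endpoint
#    instead of the public Internet.
runtime = boto3.client("sagemaker-runtime", region_name=region)
response = runtime.invoke_endpoint(
    EndpointName="my-model-endpoint",            # hypothetical endpoint name
    ContentType="text/csv",
    Body=b"5.1,3.5,1.4,0.2",
)
print(response["Body"].read())
```

Note that the client code itself does not change; once the interface endpoint exists and private DNS is enabled, the same SageMaker Runtime calls are simply routed privately.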
