Thursday, 30 November 2017

AWS Greengrass now adds support for Machine Learning Inference

AWS Greengrass Machine Learning Inference makes it convenient to run ML inference on AWS Greengrass devices using models that are trained and built in the cloud. Previously, training Machine Learning models and performing ML inference were done almost exclusively in the cloud. With AWS Greengrass Machine Learning Inference, Greengrass devices can make smart decisions quickly as data is being generated, even when they are disconnected from the cloud. This new support simplifies the process of deploying ML models to edge devices.
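As a rough illustration of the idea, the sketch below shows what a Greengrass Lambda function performing local inference might look like. It is an assumption-heavy example: the model path, the load_model stub, and the predict call are placeholders standing in for whatever ML framework the cloud-trained model actually uses; only the Greengrass Core SDK publish call reflects a real API.

```python
# Hypothetical sketch: a Greengrass Lambda function that runs inference
# locally against a model deployed from the cloud as an ML resource.
import json

import greengrasssdk

# Client for publishing results back to AWS IoT from the edge device.
iot_client = greengrasssdk.client('iot-data')

# Greengrass mounts the cloud-trained model at a local path defined by
# the group's ML resource configuration (this path is an assumption).
MODEL_PATH = '/greengrass-machine-learning/model'


def load_model(path):
    # Placeholder for a framework-specific loader (e.g. MXNet or
    # TensorFlow); the real call depends on how the model was trained.
    raise NotImplementedError


model = load_model(MODEL_PATH)


def function_handler(event, context):
    # Run inference on the device itself, so a decision can be made
    # even while the device is disconnected from the cloud.
    prediction = model.predict(event['data'])  # placeholder predict call
    # Publish the result to an IoT topic when connectivity allows.
    iot_client.publish(
        topic='greengrass/inference/results',
        payload=json.dumps({'prediction': prediction}),
    )
```

The key design point is that only the result, not the raw sensor data, needs to travel to the cloud, which is what lets the device keep making decisions locally.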
