Monday, 22 July 2019

Amazon SageMaker Batch Transform Now Supports Associating Prediction Results with Input Attributes

Amazon SageMaker Batch Transform lets you run predictions on datasets stored in Amazon S3. It is ideal for scenarios where you are working with large batches of data and do not need sub-second latency. You can now configure your Batch Transform jobs to exclude specific data attributes from prediction requests, and to join some or all of the input data attributes with the prediction results. As a result, you no longer need additional pre-processing or post-processing when running batch predictions on data in CSV or JSON format.

For example, suppose a dataset contains three attributes: ID, age, and height. The ID attribute is a randomly generated or sequential number that carries no signal for the ML problem and was not used when training the ML model. You can configure your Batch Transform job to exclude the ID attribute from each record, so that only the age and height attributes are sent in the prediction requests to the model, and to join the ID attribute with the prediction results in the job's final S3 output. Retaining record-level attributes in this way can be helpful for analyzing the prediction results.

This feature is available in all regions where Amazon SageMaker is available. For the complete list of AWS Regions where Amazon SageMaker is available, see the AWS Region Table. To learn more about this feature, see the Amazon SageMaker documentation.
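As a rough sketch of how the ID/age/height example above maps onto the API, the new behavior is configured through the `DataProcessing` block of a `CreateTransformJob` request, using JSONPath filters (`InputFilter`, `OutputFilter`) and `JoinSource`. The job, model, and bucket names below are hypothetical placeholders; only the request structure is built and printed here, and the actual `boto3` call is left commented out:

```python
import json

# Hypothetical job, model, and bucket names -- replace with your own resources.
transform_job_params = {
    "TransformJobName": "batch-predict-with-id",
    "ModelName": "height-age-model",
    "TransformInput": {
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/input/",
            }
        },
        "ContentType": "text/csv",
        "SplitType": "Line",
    },
    "TransformOutput": {
        "S3OutputPath": "s3://my-bucket/output/",
        "AssembleWith": "Line",
        "Accept": "text/csv",
    },
    "TransformResources": {"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
    # DataProcessing carries the new filtering/joining behavior.
    "DataProcessing": {
        # Each CSV record is [ID, age, height]; drop the first column (ID)
        # so only age and height are sent to the model.
        "InputFilter": "$[1:]",
        # Join each prediction with its original input record...
        "JoinSource": "Input",
        # ...then keep only the ID and the prediction in the S3 output.
        "OutputFilter": "$[0,-1]",
    },
}

# With boto3 you would submit this request as:
#   import boto3
#   boto3.client("sagemaker").create_transform_job(**transform_job_params)
print(json.dumps(transform_job_params["DataProcessing"], indent=2))
```

For JSON input the same JSONPath filters address object fields rather than CSV columns, but the overall shape of the request is the same.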


Amazon Athena Now Supports Querying Data in Amazon S3 Requester Pays Buckets

Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3)...
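As a hedged sketch of how this capability is exposed: Requester Pays access in Athena is controlled at the workgroup level, via the `RequesterPaysEnabled` flag in the workgroup configuration. The workgroup name and results bucket below are hypothetical; the request structure is built and printed, and the actual `boto3` call is left commented out:

```python
import json

# Hypothetical workgroup and results-bucket names -- replace with your own.
workgroup_request = {
    "Name": "requester-pays-wg",
    "Configuration": {
        "ResultConfiguration": {
            "OutputLocation": "s3://my-query-results/"
        },
        # Allow queries in this workgroup to read S3 Requester Pays buckets;
        # the querying (requester) account is billed for the data access.
        "RequesterPaysEnabled": True,
    },
}

# With boto3 you would create the workgroup as:
#   import boto3
#   boto3.client("athena").create_work_group(**workgroup_request)
print(json.dumps(workgroup_request, indent=2))
```

Queries submitted in a workgroup with this flag enabled can then reference tables whose data lives in Requester Pays buckets just like any other S3-backed table.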