Saturday, 17 November 2018

Amazon Aurora Serverless Now Available in More Regions

Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora, a MySQL- and PostgreSQL-compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open-source databases. With Aurora Serverless, the database automatically starts up, shuts down, and scales capacity up or down based on your application's needs, so you can run your database in the cloud without managing any database instances. It is a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads.

Amazon Aurora Serverless is now available in nine additional AWS Regions. With the addition of Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), US West (N. California), Canada (Central), EU (Frankfurt), EU (London), and EU (Paris), you can now choose the Serverless configuration of Amazon Aurora in 14 geographic Regions.
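
As a quick illustration, the sketch below uses the AWS SDK for Python (boto3) to create an Aurora Serverless cluster in one of the newly supported Regions. The cluster identifier, credentials, and capacity limits are placeholder values chosen for this example, not values from the announcement.

```python
import boto3

# Placeholder values for this sketch; adjust the Region, identifier,
# credentials, and capacity limits for your own environment.
rds = boto3.client("rds", region_name="ap-south-1")  # e.g. Asia Pacific (Mumbai)

response = rds.create_db_cluster(
    DBClusterIdentifier="demo-serverless-cluster",
    Engine="aurora",                  # MySQL-compatible Aurora
    EngineMode="serverless",          # selects the Serverless configuration
    MasterUsername="admin_user",
    MasterUserPassword="ChangeMe123!",
    ScalingConfiguration={
        "MinCapacity": 2,             # Aurora capacity units (ACUs)
        "MaxCapacity": 8,
        "AutoPause": True,            # pause the cluster when idle
        "SecondsUntilAutoPause": 300,
    },
)
print(response["DBCluster"]["Status"])
```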

AWS Centralized Logging Now Uses Amazon Cognito for User Authentication

AWS says “Spend your time creating great apps. Let Amazon Cognito handle authentication.”

Amazon Cognito User Pools provides a secure user directory that scales to hundreds of millions of users. Users can sign in through social identity providers such as Google, Facebook, and Amazon, and through enterprise identity providers such as Microsoft Active Directory via SAML. Amazon Cognito User Pools is itself a standards-based identity provider and supports identity and access management standards such as OAuth 2.0, SAML 2.0, and OpenID Connect. Amazon Cognito provides multi-factor authentication and encryption of data at rest and in transit. Amazon Cognito is HIPAA eligible and PCI DSS, SOC, ISO/IEC 27001, ISO/IEC 27017, ISO/IEC 27018, and ISO 9001 compliant. Amazon Cognito also lets you control access to back-end resources from your app: you can define roles and map users to different roles so your app can access only the resources that each user is authorized to use.
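
For illustration, here is a minimal boto3 sketch of a user signing up and then authenticating against a user pool app client. The client ID and credentials are hypothetical placeholders, and the app client is assumed to have the USER_PASSWORD_AUTH flow enabled.

```python
import boto3

# Hypothetical app client ID and user credentials for this sketch.
idp = boto3.client("cognito-idp", region_name="us-east-1")
CLIENT_ID = "example-app-client-id"

# Register a new user in the user pool.
idp.sign_up(
    ClientId=CLIENT_ID,
    Username="jane",
    Password="Sup3r-Secret!",
    UserAttributes=[{"Name": "email", "Value": "jane@example.com"}],
)

# Once the user is confirmed, exchange credentials for JWT tokens.
auth = idp.initiate_auth(
    ClientId=CLIENT_ID,
    AuthFlow="USER_PASSWORD_AUTH",   # must be enabled on the app client
    AuthParameters={"USERNAME": "jane", "PASSWORD": "Sup3r-Secret!"},
)
print(auth["AuthenticationResult"]["IdToken"][:20], "...")
```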

AWS Centralized Logging is a solution that provisions the services needed to collect, analyze, and display logs on AWS across multiple accounts and AWS Regions. The solution now leverages the scalability and security features of Amazon Cognito User Pools for Kibana dashboard user authentication and supports Amazon Elasticsearch Service (Amazon ES) version 6.3, including the option to encrypt Amazon ES data at rest. For more information on Centralized Logging, visit the solution webpage.
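
The Centralized Logging solution provisions these pieces through its own templates, but as a rough sketch, the boto3 call below shows how an Amazon ES 6.3 domain can be configured with Cognito authentication for Kibana and encryption at rest. All identifiers and ARNs are placeholders for this example, not values from the solution.

```python
import boto3

es = boto3.client("es", region_name="us-east-1")

# Placeholder identifiers; the real solution creates and wires these for you.
response = es.create_elasticsearch_domain(
    DomainName="centralized-logging-demo",
    ElasticsearchVersion="6.3",
    CognitoOptions={
        "Enabled": True,
        "UserPoolId": "us-east-1_EXAMPLE",
        "IdentityPoolId": "us-east-1:11111111-2222-3333-4444-555555555555",
        "RoleArn": "arn:aws:iam::123456789012:role/CognitoAccessForAmazonES",
    },
    EncryptionAtRestOptions={"Enabled": True},  # optionally pass a KmsKeyId
)
print(response["DomainStatus"]["DomainName"])
```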

Friday, 16 November 2018

Amazon Data Lifecycle Manager policies supported by AWS CloudFormation

AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on the applications that run in AWS. You create a template that describes all the AWS resources you need, such as Amazon EC2 instances or Amazon RDS DB instances, and AWS CloudFormation takes care of provisioning and configuring those resources for you. You don't need to individually create and configure AWS resources or work out what depends on what; AWS CloudFormation handles all of that.
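
As a minimal sketch of this template-driven model, the snippet below describes a single hypothetical resource in a template and hands it to CloudFormation with boto3; CloudFormation then provisions the resource for you.

```python
import boto3

# A tiny template describing one hypothetical resource (an S3 bucket).
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(StackName="demo-template-stack", TemplateBody=TEMPLATE)

# Wait until CloudFormation has finished provisioning the resources.
cfn.get_waiter("stack_create_complete").wait(StackName="demo-template-stack")
```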

Amazon Data Lifecycle Manager (DLM) lifecycle policies can now be used as a resource in your AWS CloudFormation templates, stacks, and StackSets. Amazon DLM offers a simple, automated way to back up data stored on Amazon EBS volumes: you define backup and retention schedules for EBS snapshots with DLM lifecycle policies. You can now create, edit, and delete lifecycle policies with AWS CloudFormation as part of your CloudFormation templates and incorporate lifecycle policies into your automated infrastructure deployments.
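
For example, a DLM lifecycle policy can now be declared directly in a template as an AWS::DLM::LifecyclePolicy resource and deployed the same way as the stack sketched above. The role ARN, tags, and schedule below are placeholder values, and the exact property names should be checked against the CloudFormation user guide.

```python
# A hypothetical template snippet declaring a DLM lifecycle policy that
# snapshots EBS volumes tagged backup=true every 12 hours and keeps 5 copies.
DLM_TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  TwiceDailyEbsBackupPolicy:
    Type: AWS::DLM::LifecyclePolicy
    Properties:
      Description: Snapshot EBS volumes tagged backup=true
      State: ENABLED
      ExecutionRoleArn: arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole
      PolicyDetails:
        ResourceTypes:
          - VOLUME
        TargetTags:
          - Key: backup
            Value: 'true'
        Schedules:
          - Name: TwiceDailySnapshots
            CreateRule:
              Interval: 12
              IntervalUnit: HOURS
              Times:
                - '09:00'
            RetainRule:
              Count: 5
"""
```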

Read the AWS CloudFormation user guide to learn how to use a DLM lifecycle policy as a resource in your templates, stacks, and StackSets. Learn more about Amazon DLM here.

AWS IoT Core Enhances the Ability to Ingest Large Amounts of Device Data

The Internet of Things (IoT) is a central part of digital transformation. With AWS IoT, you can easily and securely manage large numbers of devices, run analytics and machine learning, and take action to make better, faster decisions. AWS offers a complete portfolio, from edge to cloud, for both Industrial IoT (IIoT) and the connected home.

AWS IoT Core now supports a new feature, Basic Ingest, which allows AWS IoT Core customers to securely send large volumes of device data to more than 10 AWS services, such as Amazon Kinesis and Amazon S3, via AWS IoT Rule Actions without any additional messaging charges.

Basic Ingest optimizes data flow for high-volume ingestion workloads by removing the publish/subscribe Message Broker from the ingestion path. As a result, customers now have a more cost-effective option for sending device data to other AWS services while continuing to benefit from all the security and data-processing features of AWS IoT Core.
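
In practice, a device (or, as sketched here, a back-end client using boto3) targets a rule directly by publishing to the reserved $aws/rules/<rule-name> topic, which bypasses the Message Broker. The rule name and payload below are hypothetical; the rule itself would forward the message to a service such as Kinesis or S3.

```python
import json
import boto3

# Hypothetical rule that forwards telemetry to Kinesis via an IoT Rule Action.
iot_data = boto3.client("iot-data", region_name="us-east-1")

iot_data.publish(
    topic="$aws/rules/ForwardTelemetryToKinesis",  # Basic Ingest reserved topic
    qos=0,
    payload=json.dumps({"deviceId": "sensor-42", "temperature": 21.7}),
)
```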

Basic Ingest is available in the Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), China (Beijing) operated by Sinnet, AWS GovCloud (US), US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Frankfurt), EU (Dublin), and EU (London) AWS Regions.

AWS Launches Amazon Corretto

Amazon Corretto is a no-cost, multiplatform, production-ready distribution of the Open Java Development Kit (OpenJDK). Amazon has already made multiple contributions to OpenJDK 8, and AWS looks forward to working with the OpenJDK community on future improvements to OpenJDK 8 and 11.

Amazon Corretto comes with long-term support that will include performance improvements and security fixes. Amazon Corretto is certified as compatible with the Java SE standard and runs internally on thousands of Amazon production services. You can build and run Java applications on popular operating systems, including Amazon Linux 2, Windows, and macOS.

AWS brings fixes made in OpenJDK downstream, adds enhancements based on its own experience and requirements, and then produces Corretto builds. AWS aims for Corretto to be the default OpenJDK on Amazon Linux 2 in 2019. Amazon Corretto 8 is in preview. You can download the Corretto 8 Preview from here.

Thursday, 15 November 2018

AWS CloudFormation Now Supports Drift Detection

Drift detection enables you to detect whether a stack's actual configuration differs, or has drifted, from its expected configuration. You can use AWS CloudFormation to detect drift on an entire stack, or on individual resources within the stack. A resource is considered to have drifted if any of its actual property values differ from the expected property values; this includes cases where the property or resource has been deleted. A stack is considered to have drifted if one or more of its resources have drifted. AWS CloudFormation generates detailed information on each resource in the stack that has drifted. CloudFormation detects drift on resources that support drift detection; resources that do not support drift detection are assigned a drift status of NOT_CHECKED.

AWS CloudFormation now enables you to detect whether configuration changes were made to your stack resources outside of CloudFormation, through the AWS Management Console, CLI, or SDKs. Drift is the difference between the expected configuration values of stack resources defined in CloudFormation templates and the actual configuration values of those resources in the corresponding CloudFormation stacks. Drift detection lets you better manage your CloudFormation stacks and ensure consistency in your resource configurations. For more details on drift detection, visit the AWS Blog.
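
As a sketch, drift detection can also be triggered from the SDK: the boto3 calls below start a drift detection run on a hypothetical stack, poll until it completes, and then list the drift status of each checked resource.

```python
import time
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
STACK = "demo-template-stack"  # hypothetical stack name

# Kick off drift detection for the whole stack.
detection_id = cfn.detect_stack_drift(StackName=STACK)["StackDriftDetectionId"]

# Poll until the detection run finishes.
while True:
    status = cfn.describe_stack_drift_detection_status(
        StackDriftDetectionId=detection_id
    )
    if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
        break
    time.sleep(5)

print("Stack drift status:", status["StackDriftStatus"])

# Show per-resource drift details for the stack.
for drift in cfn.describe_stack_resource_drifts(StackName=STACK)["StackResourceDrifts"]:
    print(drift["LogicalResourceId"], drift["StackResourceDriftStatus"])
```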

AWS Batch Now Supports Fine-Grained Access Control

AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS without having to manually configure resources or schedule data analytics workloads. AWS Batch lets you define execution parameters and job dependencies, and it integrates with a wide range of popular batch computing workflow engines and languages such as Pegasus WMS, Luigi, and AWS Step Functions. You can submit the code for your batch jobs using the AWS Management Console, CLIs, or SDKs. AWS Batch plans, schedules, and runs your batch computing workloads across the full range of AWS compute services and features, such as Amazon EC2 and Spot Instances. AWS Batch provides default job queues and compute environment definitions that let you get started quickly. You do not need to install and manage batch computing software or server clusters to run your jobs, allowing you to focus on analyzing results and solving problems.
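
For example, submitting a job from the SDK is a single call once a job queue and job definition exist; the names and command below are hypothetical placeholders for this sketch.

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Hypothetical job queue and job definition created beforehand.
response = batch.submit_job(
    jobName="nightly-report-0001",
    jobQueue="analytics-queue",
    jobDefinition="report-generator:3",
    containerOverrides={
        "command": ["python", "generate_report.py", "--date", "2018-11-15"],
        "environment": [{"name": "REPORT_BUCKET", "value": "example-bucket"}],
    },
)
print("Submitted job:", response["jobId"])
```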

AWS Batch now supports enhanced Identity and Access Management (IAM)-based fine-grained access controls. IAM-based controls allow administrators to associate Portable Operating System Interface (POSIX) access controls with their IAM users in AWS Batch. In addition to POSIX support, administrators can write IAM policies that control access to specific Job Definitions and Job Queues when jobs are submitted to AWS Batch. For more information on AWS Batch, click here.
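
As a sketch of what such a fine-grained policy might look like, the IAM policy document below (shown as a Python dictionary) allows a user to submit jobs only to one hypothetical job queue and job definition; the account ID, Region, and resource names are placeholders.

```python
import json

# Hypothetical IAM policy limiting job submission to a single queue and
# job definition; attach it to a user or role with the IAM APIs or console.
submit_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "batch:SubmitJob",
            "Resource": [
                "arn:aws:batch:us-east-1:123456789012:job-queue/analytics-queue",
                "arn:aws:batch:us-east-1:123456789012:job-definition/report-generator:*",
            ],
        }
    ],
}
print(json.dumps(submit_policy, indent=2))
```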
