Saturday, 17 August 2019

Amazon Athena now supports querying data in Amazon S3 Requester Pays buckets

Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. You can point Athena at your data stored in Amazon S3 with a few clicks in the AWS Management Console and start using standard SQL to run ad-hoc queries and get results within seconds. Athena is serverless, so there is no infrastructure to set up or manage, and you pay only for the queries you run. Athena scales automatically, executing queries in parallel, so results are fast even with large datasets and complex queries. Amazon Athena now supports querying data in Amazon S3 Requester Pays buckets. With this newly launched capability, Athena workgroup administrators can configure workgroup settings to allow members to reference S3 Requester Pays buckets in their queries. Once configured, the requester, rather than the bucket owner, pays the Amazon S3 request and data transfer charges associated with the query. See Create a Workgroup in the Amazon Athena User Guide to learn how to configure this setting for your workgroup, and see Requester Pays Buckets in the Amazon Simple Storage Service Developer Guide to read more about Requester Pays buckets.
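
As a rough illustration of the workgroup setting, here is a minimal sketch using the Python SDK (boto3) that creates a workgroup with Requester Pays enabled; the workgroup name and results bucket are placeholders, not values from the announcement.

    import boto3

    # Sketch: create an Athena workgroup whose members may query
    # S3 Requester Pays buckets (names below are placeholders).
    athena = boto3.client("athena", region_name="us-east-1")

    athena.create_work_group(
        Name="requester-pays-workgroup",
        Description="Workgroup allowed to reference S3 Requester Pays buckets",
        Configuration={
            "ResultConfiguration": {
                "OutputLocation": "s3://my-query-results-bucket/athena/"
            },
            # This flag lets workgroup members query Requester Pays buckets;
            # the requester is billed for S3 request and data transfer charges.
            "RequesterPaysEnabled": True,
        },
    )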

Friday, 16 August 2019

Amazon Kinesis Data Firehose is now available in the Asia Pacific (Hong Kong) AWS Region

Amazon Kinesis Data Firehose makes it easy to reliably load streaming data into data lakes, data stores, and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with the business intelligence tools and dashboards you are already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, transform, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. Amazon Kinesis Data Firehose is now available in the Asia Pacific (Hong Kong) Region. With Amazon Kinesis Data Firehose, you don't need to write applications or manage resources, and you can also configure it to transform your data before delivering it. You can create a delivery stream in the Amazon Kinesis Console. For more information about Amazon Kinesis Data Firehose, refer to the documentation. For the complete list of Regions where Amazon Kinesis Data Firehose is available, refer to the AWS Region Table.
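
For illustration, the following boto3 sketch creates a delivery stream in the Asia Pacific (Hong Kong) Region (ap-east-1) that compresses records before delivering them to S3; the IAM role and bucket ARNs are placeholders and are assumed to exist already.

    import boto3

    # Sketch: create a Firehose delivery stream in ap-east-1 (Hong Kong).
    # The role and bucket ARNs are placeholders and must already exist.
    firehose = boto3.client("firehose", region_name="ap-east-1")

    firehose.create_delivery_stream(
        DeliveryStreamName="clickstream-to-s3",
        DeliveryStreamType="DirectPut",
        ExtendedS3DestinationConfiguration={
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::my-analytics-bucket",
            "Prefix": "clickstream/",
            "CompressionFormat": "GZIP",  # compress before landing in S3
            "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
        },
    )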

Wednesday, 14 August 2019

Amazon Elasticsearch Service now supports Elasticsearch versions 6.8 and 7.1

Amazon Elasticsearch Service (Amazon ES) is a service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analysis. With Amazon ES, you get direct access to the Elasticsearch APIs, so existing code and applications work seamlessly with the service. Amazon ES provisions all the resources for your Elasticsearch cluster and launches it. It also automatically detects and replaces failed Elasticsearch nodes, reducing the overhead associated with self-managed infrastructure. Amazon Elasticsearch Service now supports open-source Elasticsearch versions 6.8 and 7.1 and their corresponding Kibana versions. Elasticsearch 6.8 is the last minor 6.x release and provides the upgrade path to 7.x. Elasticsearch 7.1 is a major release. To deliver efficient and performant query processing, Elasticsearch 7.1 enables adaptive replica selection by default, which automatically routes requests based on the load and performance of the available shards. You can now create new domains running Elasticsearch 6.8 or 7.1, and easily upgrade existing 5.x and 6.x domains with no downtime using in-place version upgrades. These new versions of Amazon ES and Kibana on Amazon ES are available in 21 regions globally; refer to the AWS Region Table for Amazon Elasticsearch Service availability.
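
As a hedged example of an in-place upgrade, the following boto3 sketch first dry-runs the upgrade eligibility check on an existing domain and then performs the upgrade to 7.1; the domain name is a placeholder.

    import boto3

    # Sketch: dry-run an in-place upgrade of an existing Amazon ES domain
    # to Elasticsearch 7.1, then perform the actual upgrade.
    es = boto3.client("es", region_name="us-east-1")

    # PerformCheckOnly=True runs the pre-upgrade eligibility checks
    # without actually upgrading the domain.
    es.upgrade_elasticsearch_domain(
        DomainName="my-logs-domain",   # placeholder domain name
        TargetVersion="7.1",
        PerformCheckOnly=True,
    )

    # Once the checks pass, trigger the real in-place upgrade.
    es.upgrade_elasticsearch_domain(
        DomainName="my-logs-domain",
        TargetVersion="7.1",
    )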

Tuesday, 13 August 2019

Nested Workflows Now Offered by AWS Step Functions

AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Using AWS Step Functions, you can design and run workflows that stitch together services such as AWS Lambda and Amazon ECS into feature-rich applications. Workflows consist of a series of steps, with the output of one step acting as input to the next. AWS Step Functions now allows you to orchestrate complex processes by composing modular, reusable workflows. As companies rely more heavily on workflows, those workflows become harder to build, test, and modify, and common workflow patterns are often repeated in many places. By nesting your Step Functions workflows, you can build larger, more complex workflows out of smaller, simpler ones. AWS Step Functions lets you swap out and restructure workflow components without customizing code, so you can build new workflows from your existing workflows in minutes. You can start a nested workflow execution from a single workflow step, and you can configure your workflow to launch a nested workflow with the help of code snippets in the console. There is no additional cost for starting a nested workflow; nested workflows are billed the same as any other Step Functions workflow (refer to pricing). Nested workflows are available in all AWS public Regions where Step Functions is available; visit the AWS Region Table for the list.
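
To make the idea concrete, the following boto3 sketch deploys a parent state machine whose single Task state starts a nested (child) workflow through the Step Functions service integration and waits for it to complete; the child state machine ARN and IAM role are placeholders.

    import json
    import boto3

    # Sketch: a parent workflow with one Task state that starts a nested
    # (child) workflow and waits for it to finish (.sync). ARNs are placeholders.
    parent_definition = {
        "StartAt": "RunChildWorkflow",
        "States": {
            "RunChildWorkflow": {
                "Type": "Task",
                "Resource": "arn:aws:states:::states:startExecution.sync",
                "Parameters": {
                    "StateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:ChildWorkflow",
                    "Input": {"orderId.$": "$.orderId"},
                },
                "End": True,
            }
        },
    }

    sfn = boto3.client("stepfunctions", region_name="us-east-1")
    sfn.create_state_machine(
        name="ParentWorkflow",
        definition=json.dumps(parent_definition),
        roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
    )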

Monday, 12 August 2019

AWS Glue offers the FindMatches ML Transform to deduplicate data and find matching records in your dataset

AWS Glue is an extract, transform, and load (ETL) service that makes it easy and cost-effective to categorize your data, clean it, enrich it, and move it reliably between various data stores. AWS Glue is serverless, so there's no infrastructure to set up or manage. AWS Glue consists of a central metadata repository known as the AWS Glue Data Catalog, an ETL engine that automatically generates Python or Scala code, and a flexible scheduler that handles dependency resolution, job monitoring, and retries. AWS Glue can now be used to find matching records across a dataset with the new FindMatches ML Transform, a custom machine learning transformation that helps you identify matching records. By connecting the FindMatches transformation to your Glue ETL jobs, you can find related products, places, suppliers, customers, and more. You can also use it to deduplicate data, for example to find customers who have signed up more than once, or products that have inadvertently been added to your product catalog more than once. You teach the FindMatches ML Transform your definition of a "duplicate" through examples, and it uses machine learning to find other potential duplicates in your dataset. AWS Glue ML Transforms are currently available in the US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) AWS Regions.
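
As a rough sketch of how such a transform might be created programmatically, the following boto3 example defines a FindMatches transform over a cataloged table; the database, table, role, and primary-key column names are placeholders.

    import boto3

    # Sketch: create a FindMatches ML Transform over a cataloged table.
    # Database, table, role, and primary-key column are placeholders.
    glue = boto3.client("glue", region_name="us-east-1")

    glue.create_ml_transform(
        Name="dedupe-customers",
        Description="Find duplicate customer records",
        InputRecordTables=[
            {"DatabaseName": "crm", "TableName": "customers"}
        ],
        Parameters={
            "TransformType": "FIND_MATCHES",
            "FindMatchesParameters": {
                "PrimaryKeyColumnName": "customer_id",
                # Bias the model toward precision (fewer false matches).
                "PrecisionRecallTradeoff": 0.9,
            },
        },
        Role="arn:aws:iam::123456789012:role/GlueServiceRole",
        MaxCapacity=10.0,
    )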

Friday, 9 August 2019

Amazon Aurora Multi-Master is now generally available

Amazon Aurora (Aurora) is a relational database engine that is compatible with MySQL and PostgreSQL. It is up to 5X faster than standard MySQL databases and 3X faster than standard PostgreSQL databases, and it includes a high-performance storage subsystem. Aurora is fully managed by Amazon RDS, which automates time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. Amazon Aurora Multi-Master is now generally available, allowing you to create multiple read-write instances of your Aurora database across different Availability Zones, which enables uptime-sensitive applications to maintain continuous write availability through instance failure. In the event of an instance or Availability Zone failure, Aurora Multi-Master enables the Aurora database to retain read and write availability with zero application downtime; no database failover is required to resume write operations. To learn how to build highly available MySQL applications using Amazon Aurora Multi-Master, read this blog. The feature is available for Aurora MySQL 5.6 in the US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) AWS Regions. Refer to the documentation for detailed information on Aurora Multi-Master. You can create an Amazon Aurora Multi-Master cluster with a few clicks in the Amazon RDS Management Console or by downloading the latest AWS SDK or CLI.
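
The following boto3 sketch shows roughly how a multi-master cluster and two read-write instances in different Availability Zones might be created; the identifiers, password, and instance class are placeholders, and EngineMode="multimaster" is assumed to be the relevant engine mode for Aurora MySQL 5.6.

    import boto3

    # Sketch: create an Aurora MySQL 5.6 multi-master cluster and two
    # read-write instances in different AZs. Identifiers and the password
    # are placeholders.
    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_cluster(
        DBClusterIdentifier="orders-multimaster",
        Engine="aurora",                 # Aurora MySQL 5.6-compatible
        EngineMode="multimaster",        # enables multiple writer instances
        MasterUsername="admin",
        MasterUserPassword="CHANGE_ME_PLACEHOLDER",
    )

    # Each instance in a multi-master cluster accepts both reads and writes.
    for i, az in enumerate(["us-east-1a", "us-east-1b"], start=1):
        rds.create_db_instance(
            DBInstanceIdentifier=f"orders-writer-{i}",
            DBClusterIdentifier="orders-multimaster",
            DBInstanceClass="db.r4.large",
            Engine="aurora",
            AvailabilityZone=az,
        )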

Thursday, 8 August 2019

AWS Fargate Now Available in the Asia Pacific (Hong Kong) Region

AWS Fargate is a compute engine for Amazon ECS that lets you run containers without having to manage servers or clusters. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. AWS Fargate also removes the need for you to interact with or think about servers or clusters, letting you focus on designing and building your applications instead of managing the infrastructure that runs them. AWS Fargate integrates seamlessly with Amazon ECS; you simply define your application the same way you do for Amazon ECS. AWS Fargate is now available in the Asia Pacific (Hong Kong) AWS Region. There are no upfront charges for AWS Fargate; you pay only for the resources you use. For AWS Fargate pricing, refer to the Pricing Details page, and for the complete list of AWS Regions where AWS Fargate is available, refer to the Region Table.
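
For example, an existing task definition could be launched on Fargate in the Hong Kong Region (ap-east-1) with a boto3 call along these lines; the cluster, task definition, subnet, and security group identifiers are placeholders.

    import boto3

    # Sketch: run an ECS task on Fargate in ap-east-1 (Hong Kong).
    # Cluster, task definition, subnet, and security group IDs are placeholders.
    ecs = boto3.client("ecs", region_name="ap-east-1")

    ecs.run_task(
        cluster="my-cluster",
        launchType="FARGATE",
        taskDefinition="my-web-app:1",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "ENABLED",
            }
        },
    )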
