Thursday, 31 August 2017

Amazon EC2 Systems Manager can be used to report and take action on configuration compliance



Amazon Web Services has announced that Amazon EC2 Systems Manager can now report and take action on configuration compliance for Patch Manager, State Manager, and custom compliance types. Previously, you could only view patch compliance information for instances patched using Patch Manager.

With the latest release you can also see configuration compliance information for instances based on the desired state defined in a State Manager document and association. For example, you can specify a State Manager association that tests for the presence of an application or a particular firewall port setting, and then run a report to identify whether your instances comply with the specified configuration. You can also define custom compliance types, for example to report whether a particular registry setting has been disabled. Compliance reports are available within a single account, and you can view them cross-region and cross-account by deploying a resource data sync to Amazon Simple Storage Service (Amazon S3). You can then analyze this data using Amazon QuickSight and Amazon Athena. Finally, you can auto-remediate instances based on compliance reports.
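As a sketch, the resource data sync mentioned above can be described with a request shaped like the Systems Manager CreateResourceDataSync API; the sync name, bucket, and region below are placeholder assumptions, not values from the announcement.

```python
# Illustrative CreateResourceDataSync request body. The sync name, bucket
# name, and region are made-up placeholders.
resource_data_sync = {
    "SyncName": "org-compliance-sync",
    "S3Destination": {
        "BucketName": "my-compliance-reports",  # assumed bucket name
        "SyncFormat": "JsonSerDe",              # data lands as JSON, queryable by Athena
        "Region": "us-east-1",
    },
}
```

Once the data is in S3, an Athena table over the bucket lets QuickSight visualize compliance across accounts and regions.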

If instances fall out of compliance, an Amazon CloudWatch Events rule can be triggered to bring them back into compliance. Amazon EC2 Systems Manager is now available in all AWS commercial regions and AWS GovCloud (US).
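A minimal sketch of such a rule's event pattern follows; the detail-type and status strings are assumptions about the Systems Manager event format, not taken from the announcement.

```python
# Hypothetical CloudWatch Events pattern matching non-compliant instances.
# A Lambda or Automation target attached to the rule would then remediate.
event_pattern = {
    "source": ["aws.ssm"],
    "detail-type": ["Configuration Compliance State Change"],  # assumed string
    "detail": {"compliance-status": ["non_compliant"]},
}
```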

AWS Elastic Beanstalk now supports Windows .NET Core 2.0

Amazon Web Services has announced that Elastic Beanstalk now supports Windows .NET Core 2.0. Applications using .NET Core 2.0 on Windows can be deployed to AWS Elastic Beanstalk with the EB CLI, the AWS Toolkit for Visual Studio, and the AWS Management Console. You can also customize deployments, including the ability to run multiple .NET Core 2.0 applications on a single Windows Server environment in AWS Elastic Beanstalk.

The AWS Managed Services Integration App is now available for ServiceNow

The AWS Managed Services (AMS) Integration App for ServiceNow is now available, offering out-of-the-box integration between AMS and ServiceNow. Once the application is installed, customers can connect to AWS Managed Services from ServiceNow without building a custom API integration. The Integration App makes it convenient for an organization's users to handle their operational processes through a single interface and to get a complete view of their AMS resources in their organization's configuration management database (CMDB). The app is available for ServiceNow's Istanbul and Helsinki releases. To get started, download the Integration App from the ServiceNow Store, install it, and sign in with an AWS account that is subscribed to AMS.






Wednesday, 30 August 2017

You can launch a rendering fleet on AWS with the help of Deadline 10



Amazon Web Services has announced that you can now run your rendering fleet with Deadline 10. Deadline 10 is a powerful and easy-to-use render management system that is now available to all Amazon customers. It enables customers to freely use any combination of cloud-based and on-premises resources, with new features and cost-effective usage-based pricing. Rendering is a highly distributed, compute-intensive process with dependencies between digital assets and render jobs.

A rendering pipeline consists of thousands of instantiations of similar rendering processes, and Deadline is the pipeline manager that operates these distributed jobs. Deadline 10 includes usage-based licensing on AWS for Maya, Arnold, and Autodesk 3ds Max via the Thinkbox Marketplace. Deadline 10 integrates natively with AWS through customers' existing AWS accounts, enabling simple and secure expansion of on-premises render farms. It automatically connects customers' on-premises farms to AWS by tagging AWS instances for tracking and synchronizing with local asset servers, ensuring all required files are transferred before rendering begins.

Deadline 10 also offers flexible third-party licensing options, allowing customers to purchase software licenses from the Thinkbox Marketplace on AWS, deploy their existing licenses, or leverage a combination of the two. Deadline 10 is available now for usage-based licensing customers, while a new license is required for traditional floating-license users.

VMware Cloud on AWS is now available in the AWS US West (Oregon) Region

The introduction of VMware Cloud on AWS brings VMware's Software-Defined Data Center (SDDC) to the AWS cloud. With this new release, you can run your applications across VMware vSphere-based public, private, and hybrid cloud environments with optimized access to AWS services. VMware Cloud on AWS is sold, delivered, and supported by VMware as an elastically scalable, on-demand cloud service that removes barriers to cloud migration and portability and increases IT efficiency, opening new opportunities to leverage a hybrid cloud environment. VMware Cloud on AWS is available in the AWS US West (Oregon) Region and will gradually extend to other AWS regions in 2018.

Amazon has made improvements to AWS account sign-in

Amazon Web Services has made improvements to the AWS account sign-in process. You can now sign in from the AWS Management Console's homepage either as the account's root user or as an AWS Identity and Access Management (IAM) user. Previously, you had to use an account-specific URL to sign in as an IAM user; the latest update simplifies the procedure, and the old account-specific URLs continue to work. In the first step of the new sign-in procedure, root users enter their email address, while IAM users enter their account ID or account alias. In the second step, root users enter their password, and IAM users enter their user name and password. If multi-factor authentication (MFA) is enabled on your account, you will be prompted to enter the code from your MFA device. When you complete the sign-in procedure, the AWS Management Console homepage is displayed.

Tuesday, 29 August 2017

Amazon Elastic File System can be used to customize access permissions for shared directories

Amazon Elastic File System (EFS) now supports additional permissions on directories, including the setgid bit. With this release, AWS customers can customize access permissions on directories shared across a set of file system users. When the setgid bit is set on a directory, files created in that directory are associated with the directory's group rather than the primary group of the user who created the file. The sticky bit special permission on directories restricts renaming and deletion of files in the directory to the file's owner, the directory's owner, or the root user. Amazon EFS also now supports running binary files that are set as execute-only, enabling you to set access permissions so that executable files can be executed but not read or written.
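These permission bits are standard POSIX behavior, so they can be sketched locally; a temporary directory stands in for an EFS mount here.

```python
import os
import stat
import tempfile

# Sketch of the permission patterns described above, using a local temporary
# directory as a stand-in for an EFS mount (the bits behave identically).
root = tempfile.mkdtemp()
shared = os.path.join(root, "shared")
os.mkdir(shared)

# 2 = setgid (files created here inherit the directory's group),
# 1 = sticky (only a file's owner, the dir owner, or root may delete/rename),
# so 0o3775 sets both on a group-writable directory.
os.chmod(shared, 0o3775)

# An execute-only program: runnable, but not readable or writable by group/others.
tool = os.path.join(root, "tool")
with open(tool, "w") as f:
    f.write("#!/bin/sh\nexit 0\n")
os.chmod(tool, 0o711)

print(oct(stat.S_IMODE(os.stat(shared).st_mode)))  # 0o3775
print(oct(stat.S_IMODE(os.stat(tool).st_mode)))    # 0o711
```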

Amazon Kinesis Firehose is now available in US East (Ohio), EU (Frankfurt), and Asia Pacific (Tokyo)

Amazon Kinesis Firehose is the easiest way to load streaming data into the AWS cloud. Kinesis Firehose can capture, transform, and load streaming data into Amazon Kinesis Analytics, Amazon Elasticsearch Service, Amazon Redshift, and Amazon S3, enabling near-real-time analytics with the business intelligence dashboards and tools you are already using. Amazon Kinesis Firehose is now available in US East (Ohio), EU (Frankfurt), and Asia Pacific (Tokyo). To get started, create a delivery stream in the console or see the Amazon Kinesis Firehose Developer Guide.

Amazon RDS for SQL Server has increased the maximum data storage to 16 TB

Amazon RDS for SQL Server has increased its maximum storage from 4 TB to 16 TB. The new storage limit is available with the Provisioned IOPS and General Purpose (SSD) storage types. The supported range of the IOPS-to-storage (GB) ratio has also been extended from 10:1 to 50:1. The additional storage and higher IOPS help data warehouses support larger workloads and allow transactional databases to run on a single Amazon RDS instance without the need to shard data across multiple instances. This new feature is now available in all AWS regions.
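As a quick illustration of the new ratio (arithmetic only; actual service-side IOPS caps may further limit what you can provision):

```python
def iops_window(storage_gb: int) -> tuple:
    """Allowed Provisioned IOPS range for a volume, per the 10:1 to 50:1
    IOPS-to-storage (GB) ratio described above. Illustrative arithmetic only;
    real provisioning is also subject to service caps."""
    return (storage_gb * 10, storage_gb * 50)

# A 1,000 GB volume may be provisioned between 10,000 and 50,000 IOPS:
print(iops_window(1000))  # (10000, 50000)
```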

Monday, 28 August 2017

A Quick Start reference deployment is now available for HIPAA workloads on the AWS Cloud

The new AWS Quick Start sets up a model environment to help organizations with workloads that fall within the scope of the US Health Insurance Portability and Accountability Act (HIPAA), including workloads that handle protected health information (PHI). The Quick Start architecture maps to the technical requirements imposed by HIPAA regulations. It is deployed automatically by AWS CloudFormation templates and scripts that build an example multi-tier, Linux-based web application in the AWS cloud in about 30 minutes. The templates can be customized to repeatably create an auditable reference architecture that meets customer-specific needs. The Quick Start includes a deployment guide that describes the architecture in detail and provides step-by-step instructions for configuring, deploying, and validating the AWS environment. It also includes a security controls reference that maps the Quick Start's architecture components, configuration, and decisions to specific HIPAA regulatory requirements.

You can easily share content on Amazon WorkDocs by simply using Share a Link

Amazon WorkDocs has made it convenient for customers to share content with anyone by simply using Share a Link: click Share a Link in the drop-down menu in the web client. With this update, you can share content by emailing the link or by embedding it in a web page or document. A link you share on Amazon WorkDocs can be made public, or its access can be restricted to members of your Amazon WorkDocs site; you can also add a four-digit passcode or an expiration date to the link, or disable it at any time. Link activity can be viewed in the Amazon WorkDocs activity feed. Site administrators can disable all existing public links at any time and can limit which users are able to create links. Share a Link is available in all AWS regions where Amazon WorkDocs is available.

A Reputation Dashboard for email accounts is now included in Amazon Simple Email Service (SES)

Amazon Web Services has added a Reputation Dashboard to Amazon Simple Email Service that helps you track the overall complaint and bounce rates of your account. With this update you can take immediate action when issues arise, or improve your performance to grow your email sending capabilities. This feature is now available in the EU (Ireland), US East (N. Virginia), and US West (Oregon) AWS regions. For more information about the Reputation Dashboard, see Monitoring Your Sending Activity in the Amazon Simple Email Service Developer Guide.

Thursday, 24 August 2017

Hulu chose Amazon Web Services as its cloud provider



Hulu has announced that it has selected AWS as its cloud provider and is leveraging Amazon Web Services to launch its new over-the-top (OTT) live TV service.

Hulu found AWS to be a scalable, cost-effective, and efficient way to support the company's growth, including support for more than 50 channels for the launch of Hulu with Live TV in May 2017. Hulu selected AWS so that it can deliver a pleasant viewer experience even under heavy viewership traffic.
AWS will provide the cloud computing services that allow Hulu to focus on the core of its business, concentrating on innovation in OTT delivery for its highly personalized viewer experience rather than spending time managing infrastructure. In doing so, Hulu is redefining the television experience, and with the live TV feature it has become a necessity to decrease latency and increase performance.



Rafael Soltanovich, Vice President of Software Development at Hulu, said that with the latest live TV technology it has become necessary to create the best possible experience for viewers. He added that Hulu chose AWS as its cloud provider because of its variety of products and services. AWS offers the elasticity, security, and agility that Hulu needs most, which is key to launching a new service. Using AWS data centers for DVR storage, stream ingest, and repackaging has helped Hulu scale to higher availability with a faster time to market.

Mike Clayville, Vice President of Worldwide Commercial Sales at AWS, said that emerging leaders in media and entertainment like Hulu want more efficient and optimized ways to create scalable OTT and streaming solutions. He added that AWS helps such companies achieve these goals easily, and that Hulu made a smart decision in choosing AWS to improve its performance and scalability.

AWS Step Functions is now available in the EU (London) Region



AWS Step Functions makes it convenient to coordinate the components of microservices and distributed applications using visual workflows. It helps you change applications quickly and achieve scalable performance by building applications from individual components. Step Functions is a reliable way to coordinate components and step through the functions of your application.

AWS Step Functions is part of the AWS serverless platform and makes it easy to orchestrate AWS Lambda functions for serverless applications. Step Functions can also be used to orchestrate microservices running on compute resources such as Amazon EC2 and Amazon EC2 Container Service. Step Functions offers a graphical console that visualizes the components of your application as a series of steps, making it easy to build and run multi-step applications.

Step Functions ensures that your application's steps execute in order, automatically triggering and tracking each step and handling errors. Step Functions logs the state of each step, so when something goes wrong you can diagnose and debug problems quickly. Steps can be added or changed without writing code, so you can evolve your application and innovate faster.

With AWS Step Functions you pay only for the transitions between steps of your application workflow, known as state transitions. The free tier includes 4,000 state transitions per month. Step Functions is now available in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), EU (London), Asia Pacific (Tokyo), and Asia Pacific (Sydney).
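Step Functions state machines are defined in the Amazon States Language (JSON). A minimal two-step sketch follows; the Lambda function ARNs are placeholders, not real resources.

```python
import json

# Minimal Amazon States Language definition: two Task states run in
# sequence. The Lambda ARNs below are illustrative placeholders.
state_machine = {
    "Comment": "Two-step order pipeline (illustrative)",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-2:123456789012:function:ValidateOrder",
            "Next": "ChargeCard",
        },
        "ChargeCard": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-2:123456789012:function:ChargeCard",
            "End": True,
        },
    },
}

definition = json.dumps(state_machine)
```

Each move between these states counts as one billable state transition.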

Amazon EC2 G3 instances are now available in the EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney) Regions

Amazon EC2 G3 instances are the latest generation of Amazon EC2 accelerated compute instances, making it easy to get a powerful combination of GPU, CPU, and host memory. Backed by NVIDIA Tesla M60 GPUs, G3 instances offer double the CPU performance per GPU and double the host memory per GPU compared with the most powerful GPU cloud instances previously available. This makes G3 instances well suited for workloads such as 3D visualization, virtual reality applications, 3D rendering, graphics-intensive remote workstations, and video encoding. G3 instances were already available in EU (Ireland), US East (Ohio), AWS GovCloud (US), US East (N. Virginia), US West (N. California), and US West (Oregon). You can launch G3 instances using the AWS Management Console, the AWS Command Line Interface (CLI), the AWS SDKs, and third-party libraries.

ORC and Grok file formats are now supported by Amazon Redshift Spectrum

Amazon Redshift Spectrum can now be used to query data stored in the Optimized Row Columnar (ORC) and Grok file formats. Redshift Spectrum also supports several other open file formats, including Avro, CSV, TSV, Parquet, SequenceFile, TextFile, RCFile, and RegexSerDe. For more information, see the supported file formats in the Amazon Redshift Database Developer Guide. Redshift Spectrum is available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions.

Wednesday, 23 August 2017

Amazon EC2 P2 instances are now available in the EU (Frankfurt), Asia Pacific (Singapore), and Asia Pacific (Mumbai) Regions

Amazon Web Services has announced that Amazon EC2 P2 instances are now available in the EU (Frankfurt), Asia Pacific (Singapore), and Asia Pacific (Mumbai) Regions. P2 instances are intended for compute-intensive applications that require GPU coprocessors delivering consistent high performance and massive parallel floating-point performance, including seismic analysis, computational fluid dynamics, molecular modeling, computational finance, rendering workloads, and genomics. P2 instances provide up to 16 NVIDIA Tesla K80 GPUs, over 23 teraflops of double-precision floating-point performance, 192 GB of total video memory, and 40,000 parallel processing cores delivering 70 teraflops of single-precision floating-point performance.

Amazon DynamoDB Accelerator (DAX) can now be provisioned using AWS CloudFormation

Amazon has announced that you can now provision Amazon DynamoDB Accelerator (DAX) using AWS CloudFormation, a service that allows customers to easily provision, update, and manage collections of related AWS resources. DAX is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement, from milliseconds to microseconds, even at hundreds of thousands of requests per second. CloudFormation templates can now create, update, and delete DAX clusters, subnet groups, and parameter groups.
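A sketch of what such a template fragment might look like, expressed here as a Python dict for readability; the resource names, subnet IDs, and role ARN are placeholders.

```python
import json

# Illustrative CloudFormation template fragment for a DAX cluster plus its
# subnet group. Names, subnet IDs, and the IAM role ARN are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DaxSubnetGroup": {
            "Type": "AWS::DAX::SubnetGroup",
            "Properties": {
                "SubnetGroupName": "dax-subnets",
                "SubnetIds": ["subnet-11111111", "subnet-22222222"],
            },
        },
        "DaxCluster": {
            "Type": "AWS::DAX::Cluster",
            "Properties": {
                "ClusterName": "orders-cache",
                "NodeType": "dax.r3.large",
                "ReplicationFactor": 3,  # one primary plus two read replicas
                "IAMRoleARN": "arn:aws:iam::123456789012:role/DaxAccessRole",
                "SubnetGroupName": {"Ref": "DaxSubnetGroup"},
            },
        },
    },
}

template_body = json.dumps(template, indent=2)
```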

You can now deploy SQL Server on the Amazon Web Service Cloud

Amazon Web Services has announced an update to the SQL Server Quick Start to support SQL Server 2016. The Quick Start implements a high-availability solution built with Microsoft Windows Server and SQL Server running on Amazon EC2, using the Always On Availability Groups feature of SQL Server Enterprise Edition. The Quick Start sets up Windows Server Failover Clustering (WSFC) to provide high availability and disaster recovery in the AWS cloud. The update adds several features, including SQL Server 2016 with a license-included Amazon Machine Image (AMI) provided by Amazon, new R4 instance types, customizable EBS volume types, adjustable IOPS for io1, and much more. The Quick Start's AWS CloudFormation templates can be used to deploy SQL Server into a new or existing AWS infrastructure.

Tuesday, 22 August 2017

Amazon EC2 Systems Manager can now generate Amazon CloudWatch Events, and more features have been added



Amazon Web Services has added new features to Amazon EC2 Systems Manager, including new Amazon CloudWatch Events integration. You can now track State Manager association changes, and Automation is an accepted action type for event targets. Previously, CloudWatch Events could not be triggered by Systems Manager State Manager; CloudWatch Events can now be used to receive notifications when the status of a State Manager association, or of an instance within an association, changes.

For example, you can be notified when an association enters the failed state, and an AWS Lambda function can be used to stop the instance and start a new one. Systems Manager Automation workflows can now be triggered as well, helping you respond quickly to system events: workflows can fire when there is a change in your AWS environment or on a fixed schedule. Changes to State Manager associations are now recorded. Previously you could not go back and review older association settings after editing an association, but associations are now versioned and can be named using human-readable strings, letting you see the trail of association changes. Rate-based scheduling is also now supported, giving you finer-grained control over when associations run.

Approve is a new action type for Systems Manager Automation. Previously you could not pause an Automation workflow to request approval from a manager or stakeholder, but you can now use the Approve action to request approval or rejection of a step from one or more principals. A principal can be an AWS Identity and Access Management (IAM) user or role. Automation continues to the next step when the required number of approvals is reached. Amazon EC2 Systems Manager is available in all commercial regions and AWS GovCloud (US).

Certification Authority Authorization (CAA) records are now supported by Amazon Route 53

Amazon Web Services has introduced support in Amazon Route 53 for Certification Authority Authorization (CAA) resource record sets, which let you specify the certificate authorities that may issue certificates for your domains and subdomains. CAA support brings Route 53 into compliance with a Certification Authority and Browser Forum (CA/Browser Forum) requirement that certificate authorities (CAs) check for the presence of a DNS CAA record before issuing a certificate for a domain. For more information, see CAA Format in the Amazon Route 53 Developer Guide.
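For illustration, a Route 53 ChangeResourceRecordSets change batch that adds a CAA record authorizing only Amazon's CA for a placeholder domain might look like this (the domain is an assumption):

```python
# Illustrative change batch adding a CAA record for a placeholder domain.
# CAA record values follow the pattern: flags, tag, authorized CA domain.
change_batch = {
    "Comment": "Authorize only Amazon's CA to issue certificates",
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "CAA",
                "TTL": 300,
                "ResourceRecords": [
                    {"Value": '0 issue "amazon.com"'},
                ],
            },
        }
    ],
}
```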

You can now receive alerts from AWS Budgets to monitor your account



AWS Budgets has a new feature that lets customers keep a customized budget and receive alerts when their costs or usage exceed, or are forecasted to exceed, their budgeted amount. Customers can now also monitor their Reserved Instance (RI) utilization and receive alerts when it falls short of their expected target.

Reserved Instance utilization is the percentage of purchased Reserved Instance hours that were used by matching instances, and it can be monitored on a daily, monthly, quarterly, or yearly basis. For example, customers can budget and monitor RI utilization at a broad level, such as their total monthly utilization, or at a finer-grained level, such as the daily utilization of m4.large instances running the Linux operating system in a specific region.

Customers can define up to five notifications per budget, and each notification can be sent to up to ten email subscribers or published to an Amazon Simple Notification Service (SNS) topic of their choice. RI utilization alerts currently support Amazon EC2 only.

To get started with RI utilization alerting, access the AWS Budgets dashboard or visit the AWS Budgets web page.
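A sketch of the pieces an RI utilization budget involves, shaped after the AWS Budgets CreateBudget API; the threshold, names, and email address below are illustrative assumptions, not values from the announcement.

```python
# Illustrative EC2 RI utilization budget with one alert. All names, the
# threshold, and the email address are made-up placeholders.
budget = {
    "BudgetName": "ec2-ri-utilization",
    "BudgetType": "RI_UTILIZATION",
    "TimeUnit": "MONTHLY",
}
# Alert when actual utilization drops below an assumed 80% target.
notification = {
    "NotificationType": "ACTUAL",
    "ComparisonOperator": "LESS_THAN",
    "Threshold": 80.0,
}
subscribers = [{"SubscriptionType": "EMAIL", "Address": "ops@example.com"}]
```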

Monday, 21 August 2017

Amazon has come up with better security measures to keep up with the other cloud players

Amazon Web Services has introduced Amazon Macie to strengthen its security offerings; Macie automatically alerts customers when there is a security threat to their sensitive data, such as passwords. The service originated from technology Amazon acquired from Harvest.ai, a San Diego start-up whose machine learning technology and expertise bolster Amazon's cloud security strategy. No matter where you store your data, there is always some risk, so security measures must keep improving to make it harder for attackers to breach them. Amazon Macie helps with fast and easy identification of threats to sensitive data, such as passwords, stored on AWS.

VPC Endpoints for Amazon DynamoDB are now available



VPC endpoints for DynamoDB allow all network traffic between Amazon DynamoDB and your Amazon Virtual Private Cloud (VPC) to remain within the AWS cloud instead of traversing the public internet. Amazon DynamoDB provides data security and protection through TLS endpoints for encryption in transit, fine-grained access control at the item and attribute level using AWS Identity and Access Management (IAM), and a client-side encryption library.

VPC endpoints for Amazon DynamoDB improve privacy and security, most importantly for applications that handle sensitive data or have strict audit and compliance requirements. Using VPC endpoints for DynamoDB is easy and convenient: there is no additional cost, and you avoid the charges that normally apply for NAT gateway access. You no longer need an internet gateway or NAT gateway, which keeps your VPC closed off from the public internet, and network configuration is simplified because there is no firewall to manage.

IAM policies can restrict DynamoDB access to specific applications on your corporate network via the VPC endpoint. Amazon VPC endpoints for DynamoDB are available in all public AWS regions.
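For example, a policy might allow table access only through a specific endpoint; the table ARN and vpce- ID below are placeholders.

```python
# Illustrative IAM policy allowing DynamoDB access only via one VPC
# endpoint. The table ARN and endpoint ID are made-up placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
            "Condition": {
                # Requests arriving by any other path are implicitly denied.
                "StringEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}
```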

For certain workloads, use AWS Batch instead of Lambda

AWS Batch makes it convenient to automatically provision EC2 instances and scale capacity to supply AWS resources for heavy application workloads. Developers face a challenge with large and complex workloads, which demand a lot of time spent managing resources as application performance degrades over time. AWS Batch gives developers an easy way to run large numbers of batch computing jobs on AWS. Here is how AWS Batch handles the load: it runs large jobs on Elastic Compute Cloud instances using Docker images, providing an alternative for work that isn't suited to AWS Lambda. Amazon Simple Queue Service and Amazon EC2 Container Service also fit comfortably alongside the AWS Batch design.
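A sketch of the two payloads involved: a container job definition and a job submission referencing it, shaped after the Batch RegisterJobDefinition and SubmitJob APIs; the image, queue, and names are placeholders.

```python
# Illustrative AWS Batch payloads. The ECR image, queue name, and job
# names are made-up placeholders.
job_definition = {
    "jobDefinitionName": "render-frame",
    "type": "container",
    "containerProperties": {
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/render:latest",
        "vcpus": 4,
        "memory": 8192,  # MiB
        "command": ["render", "--frame", "Ref::frame"],  # parameter substitution
    },
}

submit_request = {
    "jobName": "render-frame-0042",
    "jobQueue": "render-queue",
    "jobDefinition": "render-frame",
    "parameters": {"frame": "42"},  # fills Ref::frame in the command
}
```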

Saturday, 19 August 2017

Amazon Kinesis Streams now has server-side encryption



With the recent growth of smart homes, IoT devices, big data, social networks, mobile phones, and game consoles, streaming data scenarios are the new trend. Amazon Kinesis Streams lets you create custom applications that capture, process, analyze, and store terabytes of data per hour from thousands of streaming data sources.

You can build parallel processing systems with Amazon Kinesis Streams because it allows multiple applications to process data concurrently from the same Amazon Kinesis stream. For example, you can archive processed data to Amazon S3, perform complex analytics with Amazon Redshift, and build robust, serverless streaming solutions using AWS Lambda.

Amazon Kinesis Streams supports a wide range of streaming use cases, and Amazon has now made the service more effective at protecting data in motion by adding server-side encryption (SSE) for Kinesis Streams. With SSE you can improve the security of your data and meet the compliance and regulatory requirements of your organization's data streaming needs.


Kinesis Streams is one of the AWS services in scope for the Payment Card Industry Data Security Standard (PCI DSS) compliance program. PCI DSS is a proprietary information security standard administered by the PCI Security Standards Council, which was founded by the major financial institutions.

PCI DSS compliance applies to all entities that store, process, or transmit cardholder data or sensitive authentication data, including service providers. You can request the PCI DSS Attestation of Compliance and Responsibility Summary through AWS Artifact. The good news is that compliance for Amazon Kinesis Streams doesn't stop there: Kinesis Streams is now also FedRAMP-compliant in AWS GovCloud (US).

FedRAMP, the Federal Risk and Authorization Management Program, is a US government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services.



When a data record and its partition key are put into a Kinesis stream using the PutRecord or PutRecords API, they are encrypted using an AWS Key Management Service (KMS) master key. To encrypt the incoming data with the KMS master key, Kinesis Streams uses the 256-bit Advanced Encryption Standard (AES-256) algorithm.


Using the Amazon Kinesis Management Console or the available AWS SDKs, you can enable server-side encryption on existing or new Kinesis streams. You can audit the encryption status of a particular stream in the Amazon Kinesis console, or verify that PutRecord or GetRecords calls are encrypted by using AWS CloudTrail.
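Enabling SSE on an existing stream is a single StartStreamEncryption call; a sketch of its parameters follows. The stream name is a placeholder, while `alias/aws/kinesis` refers to the AWS-managed KMS key for Kinesis.

```python
# Illustrative StartStreamEncryption parameters. The stream name is a
# placeholder; alias/aws/kinesis is the AWS-managed key for Kinesis.
start_encryption_params = {
    "StreamName": "clickstream",
    "EncryptionType": "KMS",
    "KeyId": "alias/aws/kinesis",
}
```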




About Cloud.in:

Cloud.in is an AWS Advanced Consulting Partner that delivers AWS managed services catering to every business need and aims to simplify the AWS cloud journey. Our team is the driving force behind Cloud.in, with the experience, knowledge, and skills to make cloud computing and the AWS cloud a pleasant experience.

AWS CodeStar is now available in US West (N. California) and EU (London)

AWS CodeStar provides an integrated user interface that allows you to easily manage your software development activities in one place. With AWS CodeStar you can set up your entire continuous delivery toolchain in minutes, allowing you to release code faster. You can track progress across the entire software development process, from your backlog of work items to your team's most recent code deployments.

Amazon Kinesis Firehose has a new feature available

Amazon Web Services announced one more new feature for Amazon Kinesis Firehose, the convenient way to load streaming data into AWS. Amazon has added a built-in integration that enables you to send streaming data from Amazon Kinesis Streams to Amazon Kinesis Firehose. You configure a stream as the data source for Firehose using the API or the Amazon console. The new feature lets you create and configure a Firehose delivery stream that automatically reads data from an Amazon Kinesis stream and delivers it to a destination, making it convenient to persist the data in the stream to stores such as Amazon Redshift, Amazon S3, and Amazon Elasticsearch Service.

Amazon Simple Email Service announces the availability of dedicated IP pools for email sending

Amazon Simple Email Service (SES) announced that the ability to create groups of dedicated IP addresses, called pools, for sending email is now available. Customers who send large volumes of email often lease one or more dedicated IP addresses from Amazon SES. If you have more than one dedicated IP address, you can group them into pools to organize them better. You can assign configuration settings to each pool, so that email sent with a given configuration is sent from the IP addresses that make up the pool. This makes it possible to determine which dedicated IP addresses are used for which kinds of email.

Friday, 18 August 2017

Amazon Web Services Batch is now available in Singapore

Amazon Web Services has announced that AWS Batch is now available in Singapore. AWS Batch enables scientists, engineers, and developers to easily and efficiently run thousands of batch computing jobs on AWS. It provisions the optimal type and quantity of compute resources based on the volume and resource requirements of the jobs submitted to job queues. You don't have to install and manage any batch computing software or server clusters, which lets you focus on analyzing the workloads that run on Amazon EC2 and Spot Instances.

Amazon Cognito is now available in the Asia Pacific (Singapore) Region

Amazon Web Services has announced that Amazon Cognito is now available in the Asia Pacific (Singapore) Region. Amazon Cognito provides a fully managed user directory with a customizable UI for sign-in, built-in integrations with Facebook, Google, and Login with Amazon for social sign-in, and SAML identity providers for corporate sign-in. You can create different roles and permissions for users within Amazon Cognito to secure access to backend resources. This lets you focus on the core work of creating a great app experience, instead of spending time building, scaling, and securing a solution to manage user management, authentication, authorization, and sync across devices.

AWS Directory Service for Microsoft Active Directory is now available in the US West (N. California) Region

AWS Microsoft AD, the AWS Directory Service for Microsoft Active Directory, is now available in the US West (N. California) Region. AWS Microsoft AD allows your Active Directory-aware workloads and AWS resources to use a managed Active Directory in the AWS Cloud. AWS Microsoft AD is built on actual Microsoft Active Directory and does not require you to synchronize or replicate data from your existing Active Directory to the cloud.

Thursday, 17 August 2017

Amazon GameLift's matchmaking service gets additional features

Amazon GameLift's FlexMatch now allows you to match players together based on rules that you define. FlexMatch's powerful but simple rules language makes it convenient for anyone to quickly create robust player matchmaking, whether based on player skill, latency, or any other criteria. FlexMatch rules configure how players are grouped together while managing player wait times and match quality. It also offers an analytics dashboard that shows matchmaking performance, with metrics that track player demand for matches, success and failure rates, player wait times, and more.

Amazon DynamoDB integration with VPC endpoints is now generally available

VPC endpoints are now available for DynamoDB, allowing AWS customers to keep network traffic between Amazon DynamoDB and their Amazon Virtual Private Cloud within the AWS Cloud instead of traversing the public internet. DynamoDB offers data protection and security through TLS endpoints for encryption in transit, a client-side encryption library, and fine-grained access control using AWS Identity and Access Management (IAM) that provides control at the item and attribute level. This development improves the security and privacy of applications with strict compliance and audit requirements that handle sensitive data. There is no additional cost for this feature. You will not need an internet gateway or NAT gateway, which keeps traffic off the public internet, and the simplified network configuration means there is no need to set up a firewall. You can also customize IAM policies to allow DynamoDB access via VPC endpoints only from your corporate network or only from particular applications.
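As a sketch of that last point, an IAM policy can use the `aws:sourceVpce` condition key to allow table access only through a specific VPC endpoint; the table ARN, account ID, and endpoint ID below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessOnlyViaVpcEndpoint",
      "Effect": "Allow",
      "Action": "dynamodb:*",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",
      "Condition": {
        "StringEquals": { "aws:sourceVpce": "vpce-1a2b3c4d" }
      }
    }
  ]
}
```

Requests to the table from outside the named endpoint are then implicitly denied by the absence of any other allow statement.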

AWS EC2 Systems Manager releases Systems Manager Automation, AWS Step Functions, and AWS Lambda maintenance window task types

Amazon Web Services has announced that Amazon EC2 Systems Manager now supports Systems Manager Automation, AWS Lambda, and AWS Step Functions maintenance window task types, which allow you to schedule complex workflows across your AWS resources. Previously, a maintenance window could only schedule discrete actions with Systems Manager Run Command. With this release, you can carry out complex workflows in a maintenance window by registering a Systems Manager Automation document, an AWS Lambda function, or an AWS Step Functions state machine. A Systems Manager Automation document enables use cases such as patching a SQL Server instance or taking regular Amazon Elastic Block Store snapshots of attached EBS volumes. Both pre-defined and custom workflows can be registered for these task types.
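A minimal CLI sketch of registering an Automation task with a maintenance window follows; the window ID, instance ID, role ARN, and document name are all placeholders, and the command assumes the window and role already exist:

```shell
# Register a Systems Manager Automation document as a maintenance window
# task (IDs and ARNs are hypothetical; other task types include LAMBDA,
# STEP_FUNCTIONS, and RUN_COMMAND).
aws ssm register-task-with-maintenance-window \
    --window-id mw-0123456789abcdef0 \
    --task-type AUTOMATION \
    --task-arn MyAutomationDocument \
    --targets Key=InstanceIds,Values=i-0123456789abcdef0 \
    --service-role-arn arn:aws:iam::123456789012:role/MaintenanceWindowRole \
    --max-concurrency 1 \
    --max-errors 1
```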

    

Wednesday, 16 August 2017

Amazon Virtual Private Cloud now allows customers to recover EIPs

Amazon Virtual Private Cloud now allows customers to recover Elastic IP addresses (EIPs) that may have been accidentally released. A released EIP can be recovered only if it has not yet been assigned to another customer, so the sooner you attempt recovery, the better the chances of retrieval. EIPs can be recovered through the CLI by using the allocate-address command and specifying the IP address with the address parameter.
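For illustration, the recovery command looks like this; the address shown is a documentation-range placeholder standing in for the EIP you released:

```shell
# Attempt to recover a previously released EIP by requesting that exact
# address back (203.0.113.10 is a hypothetical placeholder).
aws ec2 allocate-address --domain vpc --address 203.0.113.10
```

If the address has already been assigned to another customer, the call fails and the EIP cannot be recovered.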

AWS Glue Data Catalog now works with Amazon Redshift Spectrum

Amazon Web Services has announced that you can now use the AWS Glue Data Catalog as the central metadata repository for Amazon Redshift Spectrum. The AWS Glue Data Catalog provides central metadata storage for all of your data assets, even if they are located in different regions. If you currently use the Amazon Athena internal data catalog with Amazon Redshift Spectrum, it is suggested that you upgrade to the AWS Glue Data Catalog. AWS Glue crawlers help discover data source schemas and populate your AWS Glue Data Catalog with new table and partition definitions, while modified tables are maintained through schema versioning. AWS Glue, a fully managed ETL service, can also transform data or convert it into columnar formats to improve query performance and reduce cost.

Amazon Web Services announced a new security tool “Amazon Macie”

AWS has announced a new data security service called “Amazon Macie”. The prime function of the tool is to alert customers regarding security threats towards any sensitive data.
The tool draws on technology from Harvest.ai, a start-up previously acquired by Amazon, whose concepts are combined with the expertise of AWS technology.

It notifies customers of threats to their AWS passwords, rates risks to all stored data, and alerts on any unusual access patterns that point toward a security threat.


AWS uses artificial intelligence in Amazon Macie to accurately keep more than 70 customer data types secured. It is estimated to arrive within a few months, as AWS is also looking to add an AI-enabled text translation service.

Monday, 14 August 2017

Amazon Web Services has proved why standards lead technology platforms



Amazon Web Services becoming a member of the Cloud Native Computing Foundation, the standards body for containers, represents a compelling milestone. As Google, Microsoft, IBM, Red Hat, and many other companies providing cloud computing services compete in this space, AWS has learned that when it comes to container management systems, standards matter.

Amazon Web Services is doing very well in delivering cloud computing services, and being that big in the market, it can afford to go its own way at any time. But that isn't the case with Kubernetes, an open-source container management tool originally developed inside Google.

AWS was agile in recognizing that Kubernetes, with its open source code, had become an industry standard in itself. This made AWS realize that the battle had already been fought and won. Knowing that Google was dominant in container management, Amazon took its next step by joining the CNCF to comply with the container standards the entire industry is following. Switching instead of fighting was a good game plan.

Container management has become a huge topic among large companies, and for good reason. Containers have been replacing virtual machines because of their unique features and flexibility. Software containers simply package the code and send it to the desired location, where it can run anywhere, instead of shipping the whole operating system or software stack. An application can be broken down into smaller chunks, which makes the update process easier.

Anyone can build their own tools on this common basis for managing containers: Google has built Kubernetes, Microsoft has built Azure Container Service, and Red Hat has built OpenShift. Everyone builds a similar set of basic services and then customizes it in their own way.

It has been observed that a technology reaches greater success when it adopts standards, as the World Wide Web has shown. There is a standard way of building websites, and when companies agree on it, everything falls into place. Amazon Web Services has come to understand how much standards matter.




AWS Marketplace announced that Cost and Usage data is now available in AWS Cost Explorer



AWS Marketplace is a curated software catalog that allows customers to immediately deploy more than 3,800 third-party Amazon Machine Images, desktop solutions, and Software as a Service offerings across 35 product categories.

With this new release, AWS customers can analyze and control their AWS Marketplace costs and usage through AWS Cost Explorer, the AWS Cost and Usage Report, and AWS Budgets. AWS Cost Explorer generates default reports that allow customers to track and manage their general spending patterns, by total cost or for a specific service.

This helps customers dig deep into their usage patterns using the Cost and Usage Report (CUR) and then set budgets on AWS Marketplace usage and cost. AWS Cost Explorer can make your cloud computing spend more cost-effective and predictable.


AWS Marketplace software can now be deployed into the AWS GovCloud (US) Region



AWS Marketplace has made it possible for customers to discover and subscribe to software that supports their day-to-day workloads and deploy it from AWS Marketplace into AWS GovCloud (US).

AWS GovCloud (US) is an isolated AWS region designed so that customers with U.S. federal, state, and local government data can store sensitive data and run regulated workloads in the AWS Cloud.

Pricing in AWS Marketplace is simple, with a variety of pricing models including pay-as-you-go and monthly and annual subscription terms. Many products in AWS Marketplace offer a free trial, which allows GovCloud customers to understand the value of the product before buying.

AWS GovCloud (US) is also supported by the AWS Marketplace Bring Your Own License feature, which makes it easy to migrate and centralize existing software applications and licenses.


You can now try out the Deadline 10 render management system through Amazon Web Services



AWS customers can now use the Deadline 10 render management system in public preview. You can access any combination of cloud-based and on-premises resources for image and video rendering jobs.

Rendering is a highly distributed, compute-intensive process with dependencies between digital assets and render jobs; a single rendering pipeline can comprise thousands of client instances. Deadline is a pipeline manager that manages these distributed jobs. By tapping into AWS resources, customers can use the option of third-party usage-based licensing to grow a render farm easily and efficiently.

Deadline 10 natively integrates with AWS through customers' current accounts. It automatically connects on-premises farms to AWS by tagging AWS instances for tracking, and synchronizes local asset servers automatically to make sure the required files have been transferred before rendering begins.

With flexible third-party licensing, Deadline 10 customers using AWS resources can purchase software licenses from the Thinkbox Marketplace, use their current licenses, or leverage a combination of the two.

Friday, 11 August 2017

Amazon Web Services adds new features to EC2 Systems Manager: you can now visualize and query the instance software inventory



With the rapid development of the business sector, it has become crucial to find the right tools to manage time, work, and systems easily and systematically. At the re:Invent conference in 2016, AWS introduced Amazon EC2 Systems Manager to provide assistance with the management of software and systems.

Amazon EC2 Systems Manager is a management service that allows you to collect software inventory, configure both Linux and Windows operating systems, create system images, and apply operating system patches. It enables secure remote administration of hybrid environments, covering both EC2 instances and on-premises machines set up for Systems Manager. You can record and regulate the software configuration of these instances using AWS Config with the EC2 Systems Manager capability.

AWS has recently added another feature, Inventory, to EC2 Systems Manager to help you record the metadata of your application deployments, OS, and system configuration. With the S3 sync feature for EC2 Systems Manager, the recorded inventory data is automatically aggregated from instances in multiple accounts and different regions and stored in Amazon S3. Once your data is in Amazon S3, you can use Amazon Athena to run queries against the instance inventory and Amazon QuickSight to visualize and analyze the software inventory of your instances.

Now let's walk through how to use S3 sync with Amazon Athena and Amazon QuickSight to query and visualize the software inventory of instances. First, make sure the Amazon EC2 Systems Manager prerequisites are completed: installation of the SSM Agent on the managed instances, and configuration of the roles and permissions in AWS Identity and Access Management (IAM).

1.     First, launch a new EC2 instance for Systems Manager. After your instance is launched, install the SSM Agent. It is important that the IAM user account has administrator access in the VPC where the instance is launched. You can also create a separate IAM user account for the EC2 Systems Manager instance.

2.    To install the SSM Agent, SSH into the instance, create a temporary directory, and install the required SSM Agent software for the Amazon Linux EC2 instance. On Windows instances the SSM Agent is pre-installed, so there is no need to install it.

3.    With the Systems Manager agent now running on your instance, you need an S3 bucket to record the inventory data. Create an S3 bucket, and add a bucket policy to ensure that EC2 Systems Manager has permission to write to the bucket. To add the bucket policy, select the Permissions tab in the S3 console and then select Bucket Policy. The policy allows Systems Manager to check bucket permissions and add objects to the bucket. After specifying the policy, the S3 bucket is ready to accept the instance inventory data.
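A bucket policy for this step might look like the following sketch, based on the pattern AWS documents for Systems Manager writing to S3; the bucket name is a placeholder you would replace with your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SSMBucketPermissionsCheck",
      "Effect": "Allow",
      "Principal": { "Service": "ssm.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::my-inventory-bucket"
    },
    {
      "Sid": "SSMBucketDelivery",
      "Effect": "Allow",
      "Principal": { "Service": "ssm.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-inventory-bucket/*",
      "Condition": {
        "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
      }
    }
  ]
}
```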

4.    Go to the EC2 console to configure inventory collection using the bucket. Select Managed Resources under the Systems Manager Shared Resources category and choose the Set Inventory button. Now select the EC2 instance created earlier as the one to record inventory data from. You can also choose multiple instances to record inventory data from, if preferred.

5.    Scroll down to the Schedule section and choose the time interval at which you want the inventory metadata to be collected from the instance. Wait for the confirmation dialog noting that the inventory was deployed successfully, and then click Close to return to the main EC2 console.

6.    Click the Resource Data Sync button in the EC2 console to set up a Resource Data Sync to the S3 bucket for the managed instances. Enter the Sync Name, Bucket Name, Bucket Prefix, and Bucket Region, and then click Create.

7.    After a few minutes, check the S3 bucket to confirm that the instance inventory data is syncing. Once the data is synced to S3, you can take advantage of the querying capabilities of Amazon Athena to query and display the instance inventory data. Create a folder in the bucket for query results. Then create a database for recording and querying the data sent from SSM to the bucket by typing a CREATE DATABASE SQL statement in the Athena editor and selecting the Run Query button.

8.    When the database is created, create a table to capture the application inventory data from the Amazon S3 bucket synced by the Systems Manager Resource Data Sync.

9.    When the query succeeds, run the MSCK REPAIR TABLE command to load the partitions for the created table. After doing this, you can run queries against the inventory data synced from EC2 Systems Manager to the Amazon S3 bucket. With the data queryable, you can also use Amazon QuickSight to visualize it.
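Steps 7 through 9 might be sketched in Athena SQL roughly as follows; the database name, bucket path, and column set are assumptions you would match to what your Resource Data Sync actually writes (line-delimited JSON under an AWS:Application prefix):

```sql
-- Create the database, then a table over the synced inventory data
-- (bucket path and columns are hypothetical).
CREATE DATABASE ssminventory;

CREATE EXTERNAL TABLE IF NOT EXISTS ssminventory.aws_application (
  name            string,
  applicationtype string,
  publisher       string,
  version         string,
  installedtime   string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://my-inventory-bucket/sync-prefix/AWS:Application/';

-- Load any partitions written by the sync, then query the inventory.
MSCK REPAIR TABLE ssminventory.aws_application;

SELECT name, version, publisher
FROM ssminventory.aws_application
LIMIT 10;
```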

10.    Create an Amazon QuickSight account, go to the dashboard, and choose the Manage Data button, then the New Dataset button. You can now create a data set from the Athena table holding the Systems Manager inventory data by choosing Athena as the data source.

11.    Choose Visualize to create the data set and then analyze the data in the Athena table. Adding the ApplicationType field to the graph helps you build the visualization from the data.


Amazon Cognito launches new features that give developers an advantage

Amazon Web Services has launched general availability of Amazon Cognito User Pools, which allows application developers to add and customize the sign-up and sign-in user experience and integrate with Google, Facebook, Login with Amazon, and SAML-based identity providers. Amazon Cognito User Pools provides built-in sign-up and sign-in facilities that you can add to your app with a few lines of code using the SDK. It is a great opportunity for developers who want to increase their productivity and create a great experience for their customers; for example, you can quickly accommodate Login with Amazon with just a few lines of code.

Amazon Web Services CodeBuild supports Atlassian Bitbucket Cloud

Through AWS CodeBuild you can now build and test source code hosted in Git repositories on Atlassian Bitbucket Cloud. With this new release, CodeBuild integrates with four systems for storing source code: Atlassian Bitbucket Cloud, AWS CodeCommit, GitHub, and Amazon S3. AWS CodeBuild is a managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. You don't have to provision, manage, and scale build servers with AWS CodeBuild.

Data key caching is now supported by the AWS Encryption SDK

Data key caching allows you to reuse the data keys that protect your data instead of generating a new data key for each encryption operation. Data key caching helps you stay within service limits as your application scales, improves performance, and reduces cost. The AWS Encryption SDK is an encryption library that makes it easy to apply encryption best practices in your applications, so you can focus on your application's core functionality instead of figuring out how to encrypt and decrypt data. Use this feature cautiously: cryptographic best practices discourage excessive reuse of data keys, so cache them only when required to meet your performance goals.
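To make the idea concrete, here is a toy, local-only sketch of the caching pattern: a data key is reused until it exceeds a maximum age or a maximum number of uses, then regenerated. This is illustrative only; the class, names, and limits are invented for this example, and the real AWS Encryption SDK enforces stricter cryptographic limits and stores encrypted key material:

```python
import time
import secrets


class DataKeyCache:
    """Toy sketch of data key caching: reuse one generated key until it
    exceeds a max age or a max number of uses (hypothetical API)."""

    def __init__(self, max_age_seconds=60.0, max_uses=10):
        self.max_age = max_age_seconds
        self.max_uses = max_uses
        self._key = None
        self._created = 0.0
        self._uses = 0

    def _generate(self):
        # Stand-in for a call such as KMS GenerateDataKey.
        self._key = secrets.token_bytes(32)  # 256-bit key material
        self._created = time.monotonic()
        self._uses = 0

    def get_key(self):
        # Regenerate when there is no key yet, the key is too old,
        # or it has been used too many times.
        expired = (self._key is None
                   or time.monotonic() - self._created > self.max_age
                   or self._uses >= self.max_uses)
        if expired:
            self._generate()
        self._uses += 1
        return self._key


cache = DataKeyCache(max_age_seconds=60.0, max_uses=3)
k1 = cache.get_key()
k2 = cache.get_key()  # reused: same key object, no new "KMS call"
```

The trade-off is visible in the two limits: looser limits mean fewer key-generation calls but more records protected by a single key, which is exactly why the SDK's guidance is to keep them as tight as your performance goals allow.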

Thursday, 10 August 2017

Amazon Simple Email Service announces open and click metrics for tracking customer engagement

Amazon Web Services has introduced new features in Amazon Simple Email Service (SES) that give you the ability to track open and click events, and to publish email sending events to Amazon Simple Notification Service (SNS). This helps companies and AWS customers keep track of how recipients engage with the emails they receive, so Amazon SES users can refine and optimize their emailing strategy and programs. Amazon SNS is now an event publishing destination, which permits Amazon SES customers to publish metrics related to email sending such as opens, clicks, deliveries, rejections, bounces, and complaints.

Lumberyard Beta 1.10 is now available on Amazon

Lumberyard Beta 1.10 is now available for Amazon's free AAA game engine, and it is said to be the biggest release of the engine so far. It brings 545 features, enhancements, and fixes to increase performance and speed. Among the features are Order-Independent Transparency, Temporal Anti-Aliasing, component entity workflows, Cloud Gems, a docking system, a material editor, and brand-new cinematics features. Amazon has modified, replaced, or ripped out at least half of the code base to deliver performance more efficiently.

Amazon Cloud Directory adds a new feature to improve the performance of your directory searches

Amazon Cloud Directory now allows you to accelerate your directory searches by searching a subset of the directory using a schema facet. With this new feature, instead of searching the details of all users in the directory, you can search only the subset of users that contain, for example, an employee facet. A facet is a collection of attributes defined in a schema, and facets give you the advantage of creating different classes of objects by extending the schema. Cloud Directory helps you build flexible, cloud-native directories for organizing hierarchies of data along multiple dimensions. There is no additional charge for using facets in Cloud Directory searches.

Wednesday, 9 August 2017

AWS Snowball and AWS Snowball Edge are now available in the South America (São Paulo) Region

Amazon Web Services has made AWS Snowball and AWS Snowball Edge available in the South America (São Paulo) Region. AWS Snowball is a data transport solution that accelerates moving terabytes to petabytes of data into and out of AWS using physical storage appliances designed to be secure for physical transport. AWS Snowball eliminates the difficulties of large-scale data transfer, including high cost, security concerns, and long transfer times.

Amazon Web Services X-Ray SDK adds support for Python

AWS X-Ray allows developers to analyze, visualize, and debug distributed applications and underlying services in production, and it has now added support for Python. The AWS X-Ray SDK for Python allows Python developers to record and emit information from within their applications to the AWS X-Ray service. You can download a zip file containing the SDK, or get started in minutes using pip. Supported web frameworks and libraries include Django (1.10 and newer), boto3, botocore, requests, sqlite3, and mysql-connector.

Amazon QuickSight is now available in Asia Pacific (Singapore) and Asia Pacific (Sydney)

Amazon Web Services has made Amazon QuickSight available in the Asia Pacific (Sydney) and Asia Pacific (Singapore) Regions. New users and customers in these regions can now sign up for Amazon QuickSight with either as their home region, making SPICE capacity available locally and providing proximity to AWS and on-premises data sources. Existing users can take advantage of this availability by switching the region in the user interface to provision SPICE capacity there, enabling faster and cheaper connectivity to data sources in the region.

Tuesday, 8 August 2017

UK Met Office's high-resolution weather forecast data is now available on Amazon S3

Accurate and precise weather forecasts help farmers know when to plant crops, help governments plan for transportation hazards, and help airlines know when to fly, among many other uses. The UK Met Office's high-resolution weather forecast data is now available on Amazon S3: data from the UK Met Office Global and Regional Ensemble Prediction System is hosted as a repository on Amazon S3. It is a global weather forecast that gives notice of rapid storm development, fog, rain, snow, and wind.
