Saturday, 30 June 2018

Why is it important to tag AWS Resources?

[Image: AWS tag]


Amazon Web Services offers pay-as-you-go pricing and reasonably priced services to encourage a cost-efficient cloud computing experience. Many organizations have testified to the benefits of the AWS cost-saving pricing model. But beyond AWS's built-in ways to cut data storage and application costs, you can also save money by other means.

Today we will talk about how you can take full advantage of AWS tagging to cut costs by understanding the essentials of tagging AWS resources.

The significance of tagging AWS resources:-

AWS Cloud offers flexibility: you can scale cost and performance up and down according to your organization's requirements, which is itself a cost-saving factor.

A tag consists of a customer-defined key and an optional value; it is metadata that you can apply to individual or multiple resources for tracking and management. For example, you can schedule non-essential Elastic Compute Cloud (EC2) instances to pause during off hours or when not in use, which can cut costs by up to 70 percent. Adding a tag to non-essential instances makes it easy to search for and separate the critical instances that must remain active from those that can safely be scheduled for shutdown. Tags thus enable users to group, designate, and separate resources according to their requirements or preferences.
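
To make this concrete, here is a minimal boto3 sketch (the instance ID and tag values are illustrative placeholders, not from the post) that tags an instance with a Schedule key and then filters instances by that tag:

```python
import boto3

ec2 = boto3.client('ec2')

# Tag a non-essential instance so a scheduler can find it later.
ec2.create_tags(
    Resources=['i-0123456789abcdef0'],  # placeholder instance ID
    Tags=[{'Key': 'Schedule', 'Value': 'office-hours'}],
)

# Find every instance carrying that tag.
reservations = ec2.describe_instances(
    Filters=[{'Name': 'tag:Schedule', 'Values': ['office-hours']}]
)['Reservations']
for r in reservations:
    for inst in r['Instances']:
        print(inst['InstanceId'], inst['State']['Name'])
```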

Tags play an important role in EC2 scheduling, and automation is only one of many reasons to use them. AWS tags can be applied for security, resource classification, and cost tracking; in fact, there is no limit to how you can use them. Overlapping is not an issue either, because a single AWS resource can carry up to 50 tags.

Guide on Tagging AWS Resources:-

You can customize AWS tags however you want according to your requirements, which gives you optimum flexibility in organizing your AWS environment. That freedom is both a benefit and a risk: flexibility and options are good, but without pre-generated tags or categories the user has to start from scratch, which can be hugely time-consuming.

AWS tags are only beneficial if they are used strategically and stay relevant to the system. Used properly, they can save a lot of cost and bring more flexibility and scalability; managed poorly, they just create confusion and a mess.

[Image: AWS tagging strategy]


You can follow the points mentioned below to create a general tagging strategy:-

  • Implement a standardized, case-sensitive format for tags, and use it consistently across all resource types.
  • Use tag dimensions that help manage resource access control, automation, organization, and cost tracking.
  • Automated tools can be highly beneficial for managing resource tags. The Resource Groups Tagging API lets you programmatically filter, search, and manage tags and resources, and it simplifies backing up tag data across all supported services with a single API call per AWS Region (see the sketch after this list).
  • Prefer a small set of meaningful tags over a blast of too many tags.
  • Tags are easy to modify as requirements change, but changes have consequences, especially for access control, upstream billing reports, and automation, so be ready to handle them.
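
As referenced in the list above, here is a small sketch of the Resource Groups Tagging API via boto3; the tag key and value are hypothetical:

```python
import boto3

tagging = boto3.client('resourcegroupstaggingapi')

# List every resource tagged Environment=production across supported services.
resp = tagging.get_resources(
    TagFilters=[{'Key': 'Environment', 'Values': ['production']}]
)
for resource in resp['ResourceTagMappingList']:
    print(resource['ResourceARN'], resource['Tags'])
```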


AWS Management console:

Tags can be used to filter the AWS Management Console so that you see only the relevant resources instead of everything across all AWS services.

AWS Cost Explorer:

AWS tags can be applied in AWS Cost Explorer to get a fully detailed view of only those specific resources the tags are applied to.
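
For example, here is a boto3 sketch (the tag key "Project" and the dates are made up) that asks Cost Explorer for monthly costs grouped by a cost allocation tag:

```python
import boto3

ce = boto3.client('ce')

resp = ce.get_cost_and_usage(
    TimePeriod={'Start': '2018-06-01', 'End': '2018-07-01'},
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    # Group results by the cost allocation tag "Project" (illustrative).
    GroupBy=[{'Type': 'TAG', 'Key': 'Project'}],
)
for result in resp['ResultsByTime']:
    for group in result['Groups']:
        print(group['Keys'], group['Metrics']['UnblendedCost']['Amount'])
```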

Automation:

Tags can be used in EC2 to mark the instances that must remain active so that the rest can safely be scheduled for shutdown. Tags can also be applied to Relational Database Service instances, and they can drive automation that backs up specified data and deletes out-of-date information such as old snapshots.
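
A minimal sketch of such a scheduler, assuming the hypothetical Schedule=office-hours tag from earlier (you would run this from a cron job or an evening-triggered Lambda function):

```python
import boto3

ec2 = boto3.client('ec2')

# Collect running instances that are tagged as safe to stop after hours.
reservations = ec2.describe_instances(
    Filters=[
        {'Name': 'tag:Schedule', 'Values': ['office-hours']},
        {'Name': 'instance-state-name', 'Values': ['running']},
    ]
)['Reservations']

instance_ids = [i['InstanceId'] for r in reservations for i in r['Instances']]
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print('Stopped:', instance_ids)
```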

Control access:

You can scope AWS users' roles with tags, granting users access to only the specified environments.
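
One common pattern is an IAM policy whose Condition matches a resource tag. The sketch below (the tag key/value and policy name are illustrative) creates a policy that only allows stopping or starting EC2 instances tagged Environment=dev:

```python
import json
import boto3

iam = boto3.client('iam')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "arn:aws:ec2:*:*:instance/*",
        # Only instances tagged Environment=dev match this statement.
        "Condition": {"StringEquals": {"ec2:ResourceTag/Environment": "dev"}}
    }]
}

iam.create_policy(
    PolicyName='DevInstanceOperators',  # illustrative name
    PolicyDocument=json.dumps(policy),
)
```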

About Cloud.in:

Cloud.in is an AWS Advanced Consulting Partner that delivers AWS managed services catering to every business need, with the aim of simplifying the AWS Cloud journey. Our team is the driving force behind Cloud.in, with the experience, knowledge, and skills to make cloud computing and the AWS Cloud a pleasant experience.

Amazon Transcribe now adds support for Amazon CloudWatch Events and AWS CloudTrail

Amazon Transcribe enables customers to convert speech to text and customize the output to their needs. With the latest development, Amazon Transcribe API calls are now recorded by AWS CloudTrail, and you can use Amazon CloudWatch Events to monitor those events in near real time. This integration simplifies security analysis, troubleshooting, and resource change tracking for transcription applications, and it helps monitor application health and performance. AWS CloudTrail records all requests made to Amazon Transcribe, while Amazon CloudWatch Events delivers a near real-time stream of system events.
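
As a sketch of what that monitoring can look like (the 'Transcribe Job State Change' detail type is an assumption on my part, and the SNS topic ARN is a pre-existing placeholder), you could wire job-state events to a notification:

```python
import json
import boto3

events = boto3.client('events')

# Match Amazon Transcribe job state-change events (assumed detail type).
events.put_rule(
    Name='transcribe-job-state',
    EventPattern=json.dumps({
        'source': ['aws.transcribe'],
        'detail-type': ['Transcribe Job State Change'],
    }),
)

# Forward matched events to an assumed, pre-existing SNS topic.
events.put_targets(
    Rule='transcribe-job-state',
    Targets=[{'Id': 'notify',
              'Arn': 'arn:aws:sns:us-east-1:111122223333:transcribe-alerts'}],
)
```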

Amazon DynamoDB Backup and Restore is now available in the EU (Paris) Region

With Amazon DynamoDB Backup and Restore you can take full backups of your DynamoDB tables for data archiving and retention, which helps you meet governmental and corporate regulatory requirements. DynamoDB Backup and Restore is now available in the EU (Paris) Region. Point-in-time recovery offers continuous backups of DynamoDB table data, protecting your data against accidental deletes or writes. You can get started with a single click in the AWS Management Console, through the Command Line Interface, or with a simple API call.
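
In code, those single API calls look roughly like this boto3 sketch (the table and backup names are placeholders):

```python
import boto3

dynamodb = boto3.client('dynamodb', region_name='eu-west-3')  # EU (Paris)

# On-demand full backup of a table.
dynamodb.create_backup(TableName='Orders', BackupName='orders-2018-06-30')

# Enable point-in-time recovery (continuous backups) for the same table.
dynamodb.update_continuous_backups(
    TableName='Orders',
    PointInTimeRecoverySpecification={'PointInTimeRecoveryEnabled': True},
)
```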

Amazon MQ adds four new broker instance types for higher scalability

Amazon MQ is a message broker service that has now added support for four new M5 broker instance types, allowing users to scale their brokers to meet high throughput requirements. Amazon MQ makes it easy to operate and configure message brokers in the cloud. With this update, Amazon MQ supports six instance types in total: the previously supported mq.t2.micro and mq.m4.large, plus the new mq.m5.large, mq.m5.xlarge, mq.m5.2xlarge, and mq.m5.4xlarge. M5 instances are available in all regions where Amazon MQ is available: US East (N. Virginia, Ohio), US West (N. California, Oregon), EU (Frankfurt, Ireland), and Asia Pacific (Sydney).

Friday, 29 June 2018

With the latest update you can now grant Amazon Macie permission to access resources

Amazon Macie has now added support for service-linked roles, which grant the service the permissions it needs to access resources in other AWS services on the customer's behalf. Service-linked roles help customers meet auditing requirements: every action performed is recorded in the AWS CloudTrail logs. The Amazon Macie service-linked role, named AWSServiceRoleForAmazonMacie, carries all the permissions Macie requires to access other AWS resources and services.

With the new update you can now access Application Load Balancer logs from the AWS Elastic Beanstalk console

The AWS Elastic Beanstalk console now supports Application Load Balancer logging, so you can access logs from the Application Load Balancer to troubleshoot issues and evaluate traffic patterns. Access logs record detailed information about the requests sent to the Application Load Balancer, including request paths, client IP addresses, server responses, and latencies. Access logs are disabled by default; to enable them, turn on access log storage and specify the S3 bucket where you want the logs stored on the load balancer configuration page in the Elastic Beanstalk console.
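
If you prefer code to the console flow described above, the same settings can be applied with boto3 option settings; the aws:elbv2:loadbalancer namespace is the documented ALB namespace, while the environment and bucket names are placeholders:

```python
import boto3

eb = boto3.client('elasticbeanstalk')

eb.update_environment(
    EnvironmentName='my-env',  # placeholder
    OptionSettings=[
        {'Namespace': 'aws:elbv2:loadbalancer',
         'OptionName': 'AccessLogsS3Enabled', 'Value': 'true'},
        {'Namespace': 'aws:elbv2:loadbalancer',
         'OptionName': 'AccessLogsS3Bucket', 'Value': 'my-alb-logs-bucket'},
    ],
)
```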

Amazon Elastic Container Service for Kubernetes is now HIPAA eligible

Amazon Elastic Container Service for Kubernetes (EKS) enables customers to easily deploy, scale, and manage containerized applications using Kubernetes on Amazon Web Services. AWS has announced that Amazon EKS is now HIPAA eligible. Customers who have an executed Business Associate Addendum with AWS can use Amazon EKS to process encrypted Protected Health Information in Docker containers deployed on a cluster of Amazon EC2 instances. Visit the HIPAA Compliance page for more information about HIPAA eligibility.

Thursday, 28 June 2018

AWS Secrets Manager now enables cross-account access to secrets

Database credentials and API keys can now be accessed securely across AWS accounts with AWS Secrets Manager by attaching resource-based policies to secrets. Secrets Manager enables users to easily retrieve, rotate, and manage database credentials, API keys, and other secrets, and with the new update you can use secrets across AWS accounts. This means you can share a secret with a business partner without sending it by email or on handwritten notes. Resource-based policies also give you control over who can manage permissions on a secret.
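
A sketch of attaching such a resource-based policy with boto3 (the secret name and the partner account ID are made up):

```python
import json
import boto3

sm = boto3.client('secretsmanager')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # Allow a partner account (placeholder ID) to read this secret.
        "Principal": {"AWS": "arn:aws:iam::222233334444:root"},
        "Action": "secretsmanager:GetSecretValue",
        "Resource": "*"
    }]
}

sm.put_resource_policy(
    SecretId='prod/partner-api-key',  # placeholder secret name
    ResourcePolicy=json.dumps(policy),
)
```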

AWS Cost Explorer's Reserved Instance recommendations can now be accessed for linked accounts with the new update

AWS Cost Explorer offers Reserved Instance purchase recommendations based on total cross-account Amazon Relational Database Service and Amazon EC2 usage. With the latest update, you can also access custom Reserved Instance purchase recommendations for individual linked accounts through Cost Explorer. Cost Explorer evaluates your usage to detect what can be covered by Reserved Instances and generates purchase recommendations that favorably cover your On-Demand usage at the discounted Reserved Instance rate, including automatic detection of instance family usage that can be covered by size-flexible Reserved Instances.
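
Programmatically, the same recommendations are available through the Cost Explorer API; a minimal sketch (the parameter choices are illustrative):

```python
import boto3

ce = boto3.client('ce')

resp = ce.get_reservation_purchase_recommendation(
    Service='Amazon Elastic Compute Cloud - Compute',
    AccountScope='LINKED',            # per-linked-account recommendations
    LookbackPeriodInDays='SIXTY_DAYS',
    TermInYears='ONE_YEAR',
    PaymentOption='NO_UPFRONT',
)
for rec in resp.get('Recommendations', []):
    for detail in rec.get('RecommendationDetails', []):
        print(detail.get('InstanceDetails'),
              detail.get('RecommendedNumberOfInstancesToPurchase'))
```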

Exclusions list added to Amazon Inspector, detailing errors and guidance to resolve issues

With the new update, Amazon Inspector adds an exclusions list that shows the security checks and instances that were not evaluated in an assessment run, along with guidance for resolving the underlying issues. Assessment runs can fail or complete with errors for various reasons; customers can now view those reasons, fix the issues, and then execute the assessment runs successfully. After you run an assessment, open the exclusions list to see which instances or rules packages were not assessed, why, and how to resolve it.

Wednesday, 27 June 2018

Guide to successful AWS Disaster Recovery planning

[Image: Disaster recovery]
Business continuity is every business owner's first priority, and there are established ways to ensure the safety and continuity of the business. Among cloud computing platforms, Amazon Web Services is a preferred provider, offering services that enable fast disaster recovery.

Yet even with the resourceful, decentralized attributes of the cloud, such a capable platform is wasted without proper utilization and flexibility.

Difference between Backups and Disaster Recovery:-

It is always essential to have a recent backup of company data for unforeseen events. But regular backups alone are not the right strategy for disaster recovery: scheduling regular backups of Amazon EC2 instances and their attached Elastic Block Store volumes is not enough for a strong disaster recovery plan. Backup is just one small component of a larger process; if you cannot access anything, what good is the data?

To have the best disaster recovery plan, test it beforehand so that you can act with confidence when disaster strikes. A well-tested plan lets you access and restore the company's data from the AWS Cloud environment quickly.

Cross-Region Backups and identifying critical data:-

Since regular backups of company data are important, regularly backing up your Amazon EC2 resources is a vital component of disaster recovery planning. When creating a backup strategy, identify your mission-critical data and applications and specify how the data should be stored, for example as AMIs or snapshots.

Prioritize the most critical data and applications so that you back up only what matters; since time and money both factor into a backup strategy, storing only critical data saves both. Keeping backups in multiple locations offers higher resilience, because storing all copies in close geographic proximity can cripple a disaster recovery plan in a large-scale disaster. Adopting AWS gives businesses leverage here: with AWS Regions all over the world, you can keep a regional disaster from taking your business down.
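
A small boto3 sketch of a cross-region backup, assuming a placeholder volume in us-east-1 copied to eu-west-1:

```python
import boto3

# Snapshot a critical volume in the primary region.
ec2_primary = boto3.client('ec2', region_name='us-east-1')
snap = ec2_primary.create_snapshot(
    VolumeId='vol-0123456789abcdef0',  # placeholder volume
    Description='DR backup of critical data volume',
)
ec2_primary.get_waiter('snapshot_completed').wait(SnapshotIds=[snap['SnapshotId']])

# Copy the snapshot to a geographically distant region.
ec2_dr = boto3.client('ec2', region_name='eu-west-1')
copy = ec2_dr.copy_snapshot(
    SourceRegion='us-east-1',
    SourceSnapshotId=snap['SnapshotId'],
    Description='Cross-region DR copy',
)
print('DR snapshot:', copy['SnapshotId'])
```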

Recovery Time Objective and Recovery Point Objective:-

Setting the company's Recovery Time Objective (RTO) and Recovery Point Objective (RPO) is crucial. The RTO is the maximum acceptable duration of the disaster recovery process before monetary losses become unacceptable; the RPO is the maximum amount of data loss, measured in time, that the business can tolerate. Run the numbers to see what best suits your organization and set your targets accordingly.



Disaster Recovery Planning method:-

There are many disaster recovery methods, but the four below are the most common. Choose the method that best balances time and money for your needs.

Backup and Restore:

Backing up data and restoring it on demand is the most cost-effective method, but because nothing is on standby, recovery can be time-consuming.

Pilot Light:

A minimal version of the critical data and applications is kept ready so that, when disaster strikes, the environment can be brought up as quickly as possible.

Warm Standby:

A scaled-down duplicate of the company's critical applications and data runs continuously on warm standby, giving little downtime and a smooth migration.

Hot Standby:

Hot Standby runs the company's critical applications and data in two or more active locations and distributes the traffic between them. If disaster strikes, traffic is rerouted to the healthy site, so you suffer no downtime. This method is expensive, though, because running two environments at the same time incurs additional cost.

Once you have made a disaster recovery plan, test it so that you know its strengths and weaknesses. You can create a duplicate production environment specifically to test your plan against.

About Cloud.in:

Cloud.in is an AWS Advanced Consulting Partner that delivers AWS managed services catering to every business need, with the aim of simplifying the AWS Cloud journey. Our team is the driving force behind Cloud.in, with the experience, knowledge, and skills to make cloud computing and the AWS Cloud a pleasant experience.


Amazon Web Services now introduces Amazon Linux WorkSpaces

Amazon WorkSpaces is a secure cloud desktop service offered by Amazon Web Services. With the latest update, WorkSpaces can now be provisioned as Linux desktops running Amazon Linux 2. Customers can choose Windows 7, Windows 10, or Amazon Linux 2 as their preferred desktop operating system. Amazon Linux WorkSpaces come with basic Linux development tools, including the AWS SDKs and the Eclipse IDE, making it easy to build, test, and deploy code to the AWS Cloud. Organizations that struggle to manage varied hardware and models with limited options are now relieved by the flexibility AWS offers to deploy a secure cloud desktop environment with either Windows or Linux.

Amazon DynamoDB Accelerator (DAX) SDK for Go is now available

The new Amazon DynamoDB Accelerator (DAX) SDK for Go lets customers get microsecond read performance from Amazon DynamoDB tables in applications written in the Go programming language. DAX speeds up reads from DynamoDB tables by up to 10x, taking response times from milliseconds to microseconds while handling millions of requests per second. DAX manages cache population and invalidation itself, and because it works with existing DynamoDB API calls, developers can adopt it without changing their application logic. DAX is an in-memory cache for DynamoDB that does the heavy lifting on behalf of developers.



AWS Elastic Beanstalk adds support for .NET Core 2.1 on the Windows Server Platforms

AWS Elastic Beanstalk has added support for the newly released .NET Core 2.1. The version is supported on all Windows Server platforms that support .NET Core, and it is available in all corresponding regions; to see the complete list of supported platforms, click here. Even if your Elastic Beanstalk environment hasn't been updated with the latest .NET Core 2.1 version yet, you can still publish a self-contained application: direct the .NET Core publish mechanism to a self-contained publish, which includes the .NET Core 2.1 runtime in the publish bundle.

Tuesday, 26 June 2018

Amazon Cloud Directory adds Managed Schema for quick application development

You can now develop applications faster using the managed schema introduced by Amazon Cloud Directory. With a managed schema there is no need to define and set up your own schema: you can create a directory and immediately start creating and retrieving objects. The managed schema brings two new Cloud Directory features: a dynamic facet style and a flexible data type called Variant. Using a dynamic facet you can easily create objects with any number of attributes and any type of data, and attributes created on a dynamic facet use the Variant data type. There is no additional charge for using the Cloud Directory managed schema.

You can now enable IT Management Workflow automation in Amazon Macie with the new release of Administrative APIs

With the latest release of administrative APIs, now part of the AWS SDK, you can enable IT management workflow automation in Amazon Macie. You can now associate Amazon Simple Storage Service resources with Macie programmatically: as you create S3 buckets, associate them with Macie for ongoing data access monitoring, data discovery, and classification. Macie provides a dashboard that tracks credential data and personally identifiable information and raises alerts when there is unauthorized access.
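
A sketch of that association using boto3's Macie client; the bucket name and classification settings are illustrative:

```python
import boto3

macie = boto3.client('macie')

macie.associate_s3_resources(
    s3Resources=[{
        'bucketName': 'my-data-bucket',  # placeholder
        'classificationType': {
            'oneTime': 'FULL',      # classify existing objects once
            'continuous': 'FULL',   # keep classifying new objects
        },
    }]
)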

Amazon Relational Database Service for PostgreSQL Read Replicas now support Multi-AZ deployments


Amazon Relational Database Service now combines Multi-AZ deployments and Read Replicas so that production databases get disaster recovery, availability, and scalability requirements covered together. Read Replicas in a Multi-AZ configuration bring several improvements: resilient replicas, a simplified engine upgrade process, an improved DR strategy, and high availability. A standby for a Read Replica delivers high availability for the replica itself, which matters because many read-heavy workloads, including OLTP and analytics, treat serving read requests as business critical.
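
Creating such a replica is a one-call affair in boto3; the identifiers below are placeholders:

```python
import boto3

rds = boto3.client('rds')

rds.create_db_instance_read_replica(
    DBInstanceIdentifier='mydb-replica-1',       # placeholder
    SourceDBInstanceIdentifier='mydb-primary',   # placeholder
    MultiAZ=True,  # provision a standby for the replica itself
)
```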

Monday, 25 June 2018

AWS has now teamed up with Autodesk to promote generative design by giving out free cloud credits

[Image: Autodesk]


Amazon Web Services and Autodesk have teamed up to promote a new design technology called generative design, which assists designers in exploring all kinds of design options for a product.

Generative design produces different product design options based on a set of constraints, such as the manufacturing methods to be used, the materials, and the product's weight. Using machine learning and the cloud to design products can yield tons of different design options.

Autodesk launched generative design in April in its product development tool, Autodesk Fusion 360 Ultimate. Now Autodesk is making a strategic push to encourage adoption of the technology by teaming up with AWS and giving free cloud credits to 1,000 new Fusion 360 Ultimate subscribers.

Autodesk customers already get 1,000 cloud credits with their Fusion 360 Ultimate subscription, which can be used to run generative design studies in the AWS Cloud. New customers will get an additional 500 credits through the promotion, which kicks off July 1.


Autodesk said in its blog post that "Generative design isn't just transformative for the product development process but it is a scalable technology with a possibility to shake the manufacturing industry for the better." Autodesk generative design offers engineers and designers different options, with a view to providing a full design exploration solution.

Amazon Kinesis Video Streams Producer SDK is now available as Docker images and a GStreamer plugin

The Amazon Kinesis Video Streams Producer SDK is now available as Docker images for Raspberry Pi, Ubuntu, and macOS, and as a GStreamer plugin, helping customers stream video into AWS in minutes. With Amazon Kinesis Video Streams you can securely stream video from connected devices such as tablets, mobile phones, and laptops to AWS for real-time and batch-oriented processing, analytics, machine learning, and storage. Kinesis Video Streams scales and provides the infrastructure required to ingest streaming video data in real time from millions of devices. GStreamer is an open-source framework that offers a standard environment for building media data flows from a device for storage, processing, and rendering.

The new Quick Start allows you to deploy Jupiter on the AWS Cloud

You can now deploy Jupiter on the AWS Cloud with the new Quick Start in about one hour. Jupiter is a component of Cognizant's adaptive data foundation offering and Cognizant's data testing accelerator. On the AWS Cloud it can run and automate quality assurance tests on data stored in Amazon S3, Apache Hive on Amazon EMR, and Amazon Redshift. After deploying Jupiter on the AWS Cloud you can configure the databases and sources for Jupiter to test; view a dashboard with details of the most recent test run; integrate Jupiter with defect management tools such as HP ALM or Jira to enable traceability; configure source code management repositories for deploying test scripts; and try a set of sample test projects with sample datasets for Apache Hive and Amazon S3.

Amazon DynamoDB now adds a Service Level Agreement for Global Tables with a 99.99% service commitment

Amazon DynamoDB now has a Service Level Agreement, and Global Tables carry an even higher commitment. AWS announced the release of the DynamoDB SLA, which offers a strong availability commitment with no scheduled downtime. DynamoDB is a nonrelational database service that offers flexibility and consistent, single-digit-millisecond latency at any scale. AWS will use commercially reasonable efforts to make DynamoDB available in each AWS Region with a 99.99% monthly uptime commitment, as described in the Amazon DynamoDB SLA. When a DynamoDB table is a Global Table spanning the applicable AWS Regions, the availability commitment rises to 99.999 percent.

Friday, 22 June 2018

Amazon Alexa and AWS reach space to help NASA get their work done quickly and efficiently

[Image: NASA]


Astonishing as it may sound, Alexa has now reached the stars, helping the US space agency keep its daily tasks organized while processing its underlying data sets.

Tom Soderstrom, CTO and Innovation Officer at NASA's Jet Propulsion Laboratory, says that voice as a platform is the next big thing, once we learn to talk to chatbots and digital assistants the way we normally do with fellow beings or colleagues.

At the AWS Public Sector Summit, Soderstrom said that anyone with an Alexa-powered Amazon Echo smart speaker at home can enable the "NASA Mars" application, and Alexa will give all the right answers when asked about the Red Planet.

At the inaugural "Earth and Space Day," Soderstrom said that serverless computing comes into the picture when teams no longer need to build for scaling but for real-life use cases, getting results cost-effectively without compromising quality. He added that voice as a platform delivers results up to 10 times faster than written dialogue.

With serverless computing there is no need to provision, manage, or scale any servers, yet you can still build and run applications efficiently. AWS, the cloud computing market leader, offers a collection of fully managed services that let public sector clients focus more on the innovation of their products.

[Image: Amazon Echo]


Alexa helps JPL employees scan through 400,000 subcontracts and retrieve a requested copy of a contract from the data set in seconds. Alexa acts as a virtual help desk that delivers information on command, without the user needing to know where the data is stored or which passwords grant access to it. The remaining challenge is finding ways to communicate better with chatbots and digital assistants, with the aim of making voice a powerful medium.

The Jet Propulsion Laboratory, based in Pasadena, California, is a research and development center managed by the California Institute of Technology for NASA; it carries out key robotic Earth and space science missions and has nearly 6,000 employees.

Soderstrom said there are six technology waves that will first push developers to build better solutions before they reach the general public. The waves are as follows:

"New Habits" – Gaming, Always-Connected Workplace

"Implemented Artificial Intelligence" – Machine Learning, Automation, Analytics, and Chatbots

"Ever-Present Computing" – Augmented Reality, Smart Devices, Mobile, IoT

"Cyber Security Challenges" – Blockchain

"Software-Defined Everything" – DevOps, APIs, Networks, Open Source, etc.

"Accelerated Computing" – Serverless Edge Computing

Soderstrom said it is essential to combine the technologies above in the right way and move development forward; cloud computing has become the foundation on which they function. The Jet Propulsion Laboratory's integration of Alexa and IoT sensors has helped it answer queries quickly. He also said that artificial intelligence will not steal jobs; instead, it will help employees work more efficiently.

He added that artificial intelligence will alter industry norms, from retail to healthcare, automotive, transportation, and e-commerce; industries that do not adopt it will fall behind. Machines are about 80 percent effective and humans are about 80 percent effective, but combined they can reach 95 percent effectiveness.


Soderstrom warned that the next technological tsunami will come in the form of built-in intelligence, and every industry and sector has to be ready to handle that kind of wave.

Amazon WorkDocs now integrates with Hancom Thinkfree Office Online, allowing you to co-author Microsoft Office files

Microsoft Office files can now be created and co-authored in real time in the Amazon WorkDocs web application, using collaborative editing powered by Hancom Thinkfree Office Online. With this new collaboration, users can create presentations, worksheets, and documents, share them with co-workers, and make changes to files directly from the web browser. Collaborative editing lets users work together on the content of Microsoft Office files without needing to switch file formats or applications.

Amazon Sumerian adds more AWS regional availability

Amazon Sumerian is now available in six new AWS Regions: Asia Pacific (Seoul, Singapore, Mumbai, and Tokyo), Canada (Montreal), and EU (Paris). With this development, Amazon Sumerian is now available in 15 AWS Regions; you can click here to visit the AWS Region Table for more information. Amazon Sumerian lets you create and publish virtual reality, augmented reality, and 3D applications quickly and easily, without requiring specialized programming or 3D graphics expertise. Sumerian provides Hosts, an editor, an asset library, APIs, and an integrated development environment for designing, developing, and deploying scenes.

Amazon QuickSight adds support for more visual customizations and replacement of data sets for dashboards

With the latest update, Amazon QuickSight authors can replace the dataset in an analysis with a single click. Authors can create analyses and publish dashboards using test datasets from sample databases or spreadsheets, then conveniently swap in the real datasets as needed; you can click here for more details on this update. Authors can now also customize column, legend, and axis titles to offer a better experience for readers, and AWS has added options to show or hide axis titles, set minimums and maximums for axis ranges, and set increments on chart axes.

Thursday, 21 June 2018

AWS Storage Gateway Service now adds Server Message Block support

The AWS Storage Gateway service has added support for the Server Message Block (SMB) protocol to File Gateway, so file-based applications on Microsoft Windows can easily access and store objects in Amazon Simple Storage Service. With File Gateway, applications can store files as objects in S3 over SMB versions 2 and 3, as well as Network File System versions 3 and 4.1. Access to SMB file shares and objects can be controlled through corporate Active Directory domains, or you can use authenticated guest access.
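
Creating an SMB file share programmatically looks roughly like this sketch (the ARNs and IAM role are placeholders, and the gateway must already be activated):

```python
import uuid
import boto3

sgw = boto3.client('storagegateway')

sgw.create_smb_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN='arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678',
    # IAM role the gateway assumes to write objects into the bucket.
    Role='arn:aws:iam::111122223333:role/StorageGatewayS3Access',
    LocationARN='arn:aws:s3:::my-file-share-bucket',
    Authentication='ActiveDirectory',  # or 'GuestAccess'
)
```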

AWS Elemental MediaLive adds API and Console based Reserved Pricing for Channel Outputs and Inputs

With the new update to AWS Elemental MediaLive, you can select reserved pricing for inputs and outputs directly from the AWS Management Console or through the MediaLive API, making it convenient to choose the pricing model that works best for each live channel. Reserved pricing with a 12-month commitment is now available alongside on-demand pricing. AWS Elemental MediaLive is a live video processing service that creates high-quality streams for broadcast delivery.

AWS Cost and Usage Report can automatically refresh when charges relating to a previous month are detected

The AWS Cost and Usage Report provides a granular set of AWS cost and usage data, including metadata about AWS services, reservations, pricing, and more. You can now configure the report to automatically refresh when charges related to previous months, such as AWS support fees, refunds, and credits, are detected. To use this feature, open the Reports page in the AWS Billing console, enable the refresh setting for the individual report, and save the new report settings. Once enabled, the data in the report will refresh automatically whenever a charge is detected that applies to a previous month's bill.

Wednesday, 20 June 2018

Amazon Relational Database Service for Oracle adds a new feature called Optimize CPUs

Amazon RDS for Oracle offers a new feature called Optimize CPUs. It provides two ways to maximize the value of your Oracle Database licenses: you can specify a custom number of cores when launching a new instance, and you can disable Intel Hyper-Threading Technology. Optimize CPUs for RDS for Oracle builds on the recently announced Optimize CPUs capability for Amazon EC2. Together with the recent introduction of X1 and X1e instances, this gives you multiple ways to reduce database costs.
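
A sketch of the first option, specifying a custom core count and disabling hyper-threading at launch; the instance settings and credentials are illustrative only:

```python
import boto3

rds = boto3.client('rds')

rds.create_db_instance(
    DBInstanceIdentifier='oracle-lean-cores',   # placeholder
    Engine='oracle-ee',
    DBInstanceClass='db.r4.4xlarge',
    AllocatedStorage=100,
    MasterUsername='admin',
    MasterUserPassword='REPLACE_ME',            # placeholder
    ProcessorFeatures=[
        {'Name': 'coreCount', 'Value': '4'},       # fewer cores than default
        {'Name': 'threadsPerCore', 'Value': '1'},  # disable hyper-threading
    ],
)
```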

Amazon AppStream 2.0 now adds support for configuring users' default application settings

Amazon AppStream 2.0 now lets administrators set default application settings for users, including browser settings, plugins, and connection profiles. For example, you can set default connection profiles for SQL clients so users always start with the settings they need, without having to configure the applications themselves. Default application settings are available at no additional charge in all AWS Regions where AppStream 2.0 is offered; AppStream 2.0 itself uses pay-as-you-go pricing.

Amazon Pinpoint now has a Phone Number Validate feature to enhance SMS delivery rates

With the latest update, Amazon Pinpoint customers can use the Phone Number Validate feature to improve the delivery rates of the SMS messages they send through Pinpoint. Phone Number Validate catches the errors that often occur when end users enter their phone numbers in web-based forms, and it returns essential metadata about a number, such as whether it belongs to a landline, a VoIP service, or a mobile phone. Customers can use this information to make sure they deliver messages to their users over the right channel.
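
A minimal sketch of the API call (the phone number is fake):

```python
import boto3

pinpoint = boto3.client('pinpoint')

resp = pinpoint.phone_number_validate(
    NumberValidateRequest={'PhoneNumber': '+12065550100'}  # fake number
)
info = resp['NumberValidateResponse']
# PhoneType distinguishes MOBILE, LANDLINE, VOIP, and so on.
print(info.get('PhoneType'), info.get('Carrier'),
      info.get('CleansedPhoneNumberE164'))
```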

Tuesday, 19 June 2018

Amazon introduces AWS Landing Zone

AWS Landing Zone helps you quickly set up a secure, multi-account AWS environment based on AWS best practices. Setting up a multi-account environment normally takes ample time, involves configuring multiple accounts and services, and requires a deep understanding of AWS. This solution saves time by automating the setup of an environment for running secure and scalable workloads, while applying an initial security baseline through the creation of core accounts and resources. AWS Landing Zone also deploys an Account Vending Machine product for automatically setting up new accounts.

Amazon Connect gets new contact attributes for system metrics

Using the new contact attributes for system metrics, Amazon Connect now offers more routing options, such as queueing to preferred agents to increase agent utilization, or actively routing calls to the queue with the shortest wait time for customers. A contact attribute is a piece of data about a customer interaction that can be referenced in a contact flow to customize or personalize the interaction experience. These attributes can help decide where to route a call based on queue thresholds you have defined.

AWS announces new Exam Readiness courses for AWS Certifications

Amazon Web Services announced seven new Exam Readiness courses to help candidates prepare for AWS Certification exams, including Advanced Networking, Solutions Architect, Big Data, and DevOps Engineer. The courses complement the training courses offered by AWS and focus on validating technical expertise for AWS Certification. They teach how to interpret exam questions, map the concepts being tested, and allocate study time accordingly. Candidates work through sample questions to learn the rationale behind correct and incorrect answer choices. Because the training is developed by AWS, the courses reflect the latest best practices.

Monday, 18 June 2018

AWS DeepLens - A Complete Guide

[Image: AWS DeepLens]


AWS DeepLens is a video-camera-like device that lets developers learn computer vision through tutorials and hands-on, real-world exploration with a physical device. It behaves like an intelligent device, running deep learning algorithms on captured images in real time to make predictions.

What separates DeepLens from other AI-powered cameras is that its features and capabilities don't require sending video frames to the cloud, making it possible to run machine learning inference models locally. The device puts theories and hypotheses about edge computing into practice, hinting at how advanced the field can become with further development.

Even though the DeepLens device looks like a video camera, in reality it works more like a PC and less like a camera: it is a powerful computer with an attached camera that is much more advanced than the average webcam.

Specifications:

AWS DeepLens has an Intel Atom X5 processor with four cores and four threads, 8 GB of RAM, and 16 GB of storage, enough to run machine learning algorithms. It also has an embedded GPU in the form of an Intel Gen9 graphics engine, which may not be the best hardware available but is enough to run local machine learning inference.

The device runs Ubuntu 16.04 LTS, AWS Greengrass Core, and an optimized version of Intel's clDNN libraries, and it can be connected to a mouse, a standard keyboard, and an HDMI display. You can open a terminal window and use the device like any other Linux machine. The camera is a 4-megapixel (1080p) webcam with a 2D microphone array.

It is designed so developers can get models built with TensorFlow and Caffe running in under 10 minutes from start-up. Essentially, the device was built to bring machine learning concepts into the field and make them familiar to developers and general users.

Benefits of AWS DeepLens:

Custom built:

Designed with deep learning in mind, DeepLens packs over 100 GFLOPS of compute power on the device, enough to process deep learning predictions on HD video in real time.

Fully programmable:

AWS DeepLens is fully programmable using AWS Lambda functions and is easily customizable. The deep learning models on DeepLens run as part of Lambda functions, providing a familiar programming environment to experiment with.

A new approach to learning machine learning:

AWS DeepLens enables developers of all skill levels to get started with deep learning through sample projects with practical, hands-on examples that can start running with a single click.

Integrated with Amazon SageMaker:

From the AWS Management Console, models trained in Amazon SageMaker can be deployed to AWS DeepLens in just a few clicks.

Broad Framework support:

Developers can run any deep learning framework, including TensorFlow and Caffe. AWS DeepLens uses Apache MXNet to deliver a high-performance, optimized, and efficient inference engine.

AWS integrated platform:

AWS DeepLens integrates with Amazon Rekognition for advanced image analysis, Amazon Polly to build speech-enabled projects, and Amazon SageMaker for training models. The device also connects with Amazon Simple Notification Service, Amazon Simple Storage Service, AWS IoT, Amazon DynamoDB, Amazon SQS, and more.

You can build a lot of skills with AWS DeepLens, and there is already a collection of projects created by the developer community that you can use as inspiration to design similar or different models.

[Image: Artificial intelligence]


Some of the things that you can do with AWS DeepLens:

  • Accurately recognize and detect objects.
  • Classify food, for example determining whether an item is a hot dog or not.
  • Identify whether an animal is a cat or a dog.
  • Transfer the style of one image onto an entire video sequence recorded by the DeepLens in real time.
  • Identify more than 30 actions, such as playing guitar, dancing, brushing teeth, or applying lipstick.
  • Detect people's faces.
  • Detect nine different head-pose orientations.


The skills above are only a few examples; there are more DeepLens projects created by the developer community. Check them out and get inspired to build a new skill.

AWS DeepLens is now available for $249.


Amazon ElastiCache for Redis now adds support for Redis 4.0.10

Amazon ElastiCache for Redis has announced support for Redis 4.0.10. You can now take advantage of Redis 4.0's better memory management and new caching improvements to enhance the performance and memory usage of in-memory data processing workloads. Redis 4.0 adds a frequency-based least frequently used (LFU) eviction policy for removing keys, in addition to the time-based least recently used (LRU) policy. It also introduces the MEMORY family of commands for better insight into Redis memory usage: MEMORY DOCTOR offers remediation advice for memory-related issues, and MEMORY USAGE reports accurate memory consumption for a key and its value. Redis 4.0 can also defragment memory online, enabling more efficient memory usage and freeing memory for data.
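
With a Redis 4.0.10 ElastiCache endpoint you can try the new commands directly; a sketch using the redis-py client (the endpoint and key are placeholders):

```python
import redis

r = redis.StrictRedis(
    host='my-cluster.abc123.0001.use1.cache.amazonaws.com',  # placeholder
    port=6379,
)

r.set('greeting', 'hello')
# MEMORY USAGE reports the bytes consumed by a key and its value.
print(r.execute_command('MEMORY', 'USAGE', 'greeting'))
# MEMORY DOCTOR offers remediation advice for memory-related issues.
print(r.execute_command('MEMORY', 'DOCTOR'))
```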

AWS DeepLens supports TensorFlow and Caffe, Expanded MXNet layer support and much more

AWS has added several new capabilities to AWS DeepLens. DeepLens is now optimized for the TensorFlow and Caffe frameworks, and MXNet layer support has been expanded to include Deconvolution, LRN, and L2Normalization. The video stream from the DeepLens camera can now be used with Amazon Kinesis Video Streams. AWS has also added a new sample project for head pose detection, which uses a deep learning model built with the TensorFlow framework to precisely detect the movement of a person's head. Finally, you can now view a project's output in a browser on the same network.

Amazon GuardDuty upgrades AWS CloudTrail analysis, further reducing cost to customers

Amazon GuardDuty has improved its AWS CloudTrail log analysis, which reduces cost to customers. Cost reductions will vary by customer, depending on the volume of their CloudTrail logs; customers with high volumes of global CloudTrail events will see the greatest net positive impact. AWS CloudTrail records a comprehensive log of changes that take place in AWS accounts. Amazon GuardDuty analyzes this data with anomaly detection and machine learning to identify unauthorized and unusual activity, such as unauthorized access to accounts, cryptocurrency mining, and unusual infrastructure deployments, and then notifies you of possible malicious activity affecting the security of your AWS resources.

Friday, 15 June 2018

Amazon Elastic MapReduce release 5.14.0 now adds support for JupyterHub

JupyterHub can now be used on Amazon Elastic MapReduce with EMR release 5.14.0. JupyterHub is a multi-user Jupyter notebook server that gives each user their own Jupyter notebook interface, enabling multiple users to simultaneously perform exploratory data analysis and create and execute code. JupyterHub on EMR is integrated with the Spark framework, allowing users to run interactive Spark queries on an EMR cluster using Spark SQL kernels, SparkR, PySpark, and Scala. Python jobs can also run locally on the cluster and take advantage of the popular data science libraries pre-installed in the notebook.
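
Launching such a cluster through boto3 might look like the sketch below (instance counts, types, and names are placeholders, and the default EMR roles must already exist in the account):

```python
import boto3

emr = boto3.client('emr')

emr.run_job_flow(
    Name='jupyterhub-cluster',  # placeholder
    ReleaseLabel='emr-5.14.0',
    Applications=[{'Name': 'JupyterHub'}, {'Name': 'Spark'}],
    Instances={
        'MasterInstanceType': 'm4.large',
        'SlaveInstanceType': 'm4.large',
        'InstanceCount': 3,
        'KeepJobFlowAliveWhenNoSteps': True,
    },
    JobFlowRole='EMR_EC2_DefaultRole',
    ServiceRole='EMR_DefaultRole',
)
```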

AWS CloudTrail Event History now adds all management events

AWS CloudTrail Event History now automatically logs all read and write management events for supported AWS services. Event history lets you view, filter, and download recent AWS account activity, and this release gives you additional visibility into account actions over the past 90 days without setting up a trail. You can use event history to view all management events for any AWS service that integrates with CloudTrail. In addition to write events such as create, update, and delete, you can now view read events such as describe and list.
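
The new read events surface through the same LookupEvents API; a sketch filtering on the ReadOnly attribute:

```python
import boto3

cloudtrail = boto3.client('cloudtrail')

resp = cloudtrail.lookup_events(
    # 'true' returns read events such as Describe* and List* calls.
    LookupAttributes=[{'AttributeKey': 'ReadOnly', 'AttributeValue': 'true'}],
    MaxResults=20,
)
for event in resp['Events']:
    print(event['EventTime'], event['EventName'])
```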

Amazon GameLift adds new game session placement metrics

Amazon GameLift now offers 18 new metrics that give users deeper insight into what is happening with queue placements. The new metrics track how frequently the lowest-latency regions and lowest-priced fleets are selected by GameLift queues. The data extracted from the new metrics delivers insights that can help optimize latency for players on a global scale and decrease server hosting costs.

Thursday, 14 June 2018

AWS Deep Learning AMIs now add Horovod for fast multi-GPU TensorFlow training on Amazon EC2 P3 instances

The AWS Deep Learning AMIs for Amazon Linux and Ubuntu now come pre-installed and fully configured with Horovod, a popular open-source distributed training framework for scaling TensorFlow training across multiple GPUs. Horovod uses the Message Passing Interface (MPI) model, a standard for managing communication and message passing in high-performance distributed computing environments. The latest AWS Deep Learning AMIs are now available on the AWS Marketplace.
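
The Horovod pattern follows a few fixed steps: initialize, pin each process to one GPU, scale the learning rate, and wrap the optimizer. A minimal TensorFlow 1.x-style sketch (the hyperparameters are illustrative):

```python
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()  # one process per GPU, launched e.g. via: mpirun -np 8 python train.py

# Pin this process to a single local GPU.
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

# Scale the learning rate by the worker count, then wrap the optimizer
# so gradients are averaged across GPUs with MPI allreduce.
opt = hvd.DistributedOptimizer(tf.train.AdamOptimizer(0.001 * hvd.size()))

# Broadcast initial variables from rank 0 so all workers start in sync.
hooks = [hvd.BroadcastGlobalVariablesHook(0)]
```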

With the new Quick Start you can deploy Aviatrix User Virtual Private Network on the AWS Cloud

The new Quick Start builds a highly available user Virtual Private Network solution on the AWS Cloud in just 10 to 15 minutes. It deploys log analytics, authentication services, Aviatrix gateways, and the Aviatrix Controller. The Aviatrix user VPN allows users to connect directly to AWS Cloud workloads from remote locations. You can assign each VPN user a profile that defines access privileges down to multi-cloud hosts, ports, protocols, and networks.

Amazon WorkDocs adds Open with Office Online feature

Microsoft Office files stored in Amazon WorkDocs can now be opened and edited directly from the WorkDocs web application with the new Open with Office Online feature. The feature lets users edit Microsoft Office files from the WorkDocs web application using the familiar Microsoft applications, including Microsoft PowerPoint Online, Word Online, and Excel Online. Users can incorporate changes, review feedback, and collaboratively edit Office files without switching applications, and WorkDocs provides automatic version control to help users track updates to their content. There is no additional charge for Open with Office Online for users with valid Microsoft Office 365 licenses.

Wednesday, 13 June 2018

Here’s how you can secure your AWS Storage Buckets

[Image: AWS S3 bucket]


Amazon Web Services has been a great venture for businesses with data to store. AWS Simple Storage Service (S3) buckets are inexpensive, scale easily, spin up and down quickly, and are backed up and secured by Amazon, which makes them easy to manage.

That ease of management and deployment can be favorable in some situations and dangerous in others. If a bucket is set to public access, there is a high chance the data inside becomes accessible to anyone in the world.

Accenture accidentally enabled public access to a database containing 40,000 passwords and other client credentials stored in Amazon S3 buckets. Other companies, including Dow Jones, INSCOM, and Verizon, have also left their buckets open to the public. Uber stored the personal information of 57 million users on Amazon S3, where hackers got in; Uber ended up paying them to delete the data and stay silent about the breach.

RedLock, a cloud security company, found 250 organizations leaking credentials to their AWS Cloud environments. According to RedLock's report, 53 percent of organizations using cloud storage services have unintentionally exposed at least one such service to the public.

Amazon S3, an easy, secure, and ubiquitous storage service:

Greg Arnette, Director of Data Protection Platform Strategy at Barracuda Networks, says Amazon S3 is reliable, inexpensive cloud storage. Organizations using S3 for data storage have seen essentially no examples of data loss or corruption, and S3 is now jokingly referred to as the ninth wonder of the world because of the popularity it has achieved around the globe.

Companies can keep their buckets private so that only the owners and approved users can view the data, or set them to public so that anyone can access it. If a company keeps product pictures in a public bucket, those images can easily be embedded in any website. But some users set buckets containing private information to public access, assuming that nobody will know the bucket's exact address and therefore won't be able to reach it.

That is a great misunderstanding that needs to be clarified!

Hackers frequently scan for AWS S3 buckets looking for data that can be exploited. Many customers don't realize that keeping data secure is a responsibility shared with the cloud provider: as Amazon puts it, AWS is responsible for the security of the cloud, and customers are responsible for security in the cloud. Enterprises should understand how their S3 storage works end to end by performing quality assurance on policies and configurations, maintaining access control lists, and auditing which users are authorized to access what.

Managing an entire AWS account is no easy task, because there is a lot to take care of. Many AWS managed service providers can offer insight into how best to use AWS services, and it is often better to let an AWS partner guide you through the AWS Cloud journey so that you don't fall into any pitfalls.

Companies can use AWS Identity and Access Management to address this problem with a top-down policy that locks down all buckets by default and makes exceptions only when a bucket should be public. Companies with multiple AWS accounts can use AWS Organizations for a central management console. Amazon GuardDuty can analyze bucket activity and notify you of suspicious access, and AWS CloudTrail can be used for governance, risk auditing, compliance, and operational auditing.

Following the recent incidents involving S3 buckets, Amazon has taken steps to make security easier. New S3 buckets are now private by default, public buckets are marked with a bright orange icon, and anyone changing a bucket from private to public sees a warning: "We highly recommend that you never grant any kind of public access to the S3 bucket." Amazon also announced that all buckets are encrypted by default.
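
Given the shared responsibility described above, it is worth auditing bucket ACLs periodically. A minimal boto3 sketch that flags buckets granting access to everyone:

```python
import boto3

s3 = boto3.client('s3')
ALL_USERS = 'http://acs.amazonaws.com/groups/global/AllUsers'

for bucket in s3.list_buckets()['Buckets']:
    name = bucket['Name']
    try:
        grants = s3.get_bucket_acl(Bucket=name)['Grants']
    except Exception as exc:  # e.g. access denied on a restricted bucket
        print(name, 'SKIPPED:', exc)
        continue
    for grant in grants:
        if grant.get('Grantee', {}).get('URI') == ALL_USERS:
            print(name, 'is PUBLIC:', grant['Permission'])
```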

PS: If you want a guide on how to create an AWS S3 bucket, you can click here.

Networking & Connectivity in Hybrid Cloud: Connecting On-Premises with AWS

Hybrid cloud adoption is on the rise as organizations aim to balance on-premises control with the scalability of public clouds like AWS. A c...