Tuesday, 30 April 2019

DynamoDBMapper Now Supports Amazon DynamoDB Transactional API Calls

The AWS SDK for Java offers a DynamoDBMapper class that lets you map your client-side classes to DynamoDB tables. To use DynamoDBMapper, you define the relationships between items in a DynamoDB table and their corresponding object instances in your code. The DynamoDBMapper class lets you access your tables, perform various create, read, update, and delete (CRUD) operations, and run queries. DynamoDBMapper now supports the Amazon DynamoDB transactional API calls, so developers who use DynamoDBMapper can express coordinated, all-or-nothing changes to multiple items both within and across DynamoDB tables. In other words, developers can now use DynamoDBMapper to run transactions. The transactional APIs provide developers with atomic, consistent, isolated, and durable (ACID) operations in DynamoDB, so they can maintain data correctness in their applications more easily. With transactions, developers can support workflows and business logic that require adding, updating, or deleting multiple items as a single, atomic operation. This new feature is available in all standard AWS Regions where DynamoDB is available. Pricing for transactions depends on the sizes of the items in the transaction. To download the AWS SDK for Java with the current version of the DynamoDBMapper class, refer to AWS SDK for Java.
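The entry above is about the Java mapper, but the underlying TransactWriteItems API is language-agnostic. As a rough, hypothetical sketch of the all-or-nothing semantics, here is what a two-item transaction looks like with boto3 (Python); the table, key, and attribute names are made up:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Debit one account and credit another: either both updates are applied,
    # or neither is (for example, if the balance check fails).
    dynamodb.transact_write_items(
        TransactItems=[
            {
                "Update": {
                    "TableName": "Accounts",
                    "Key": {"AccountId": {"S": "alice"}},
                    "UpdateExpression": "SET Balance = Balance - :amt",
                    "ConditionExpression": "Balance >= :amt",
                    "ExpressionAttributeValues": {":amt": {"N": "100"}},
                }
            },
            {
                "Update": {
                    "TableName": "Accounts",
                    "Key": {"AccountId": {"S": "bob"}},
                    "UpdateExpression": "SET Balance = Balance + :amt",
                    "ExpressionAttributeValues": {":amt": {"N": "100"}},
                }
            },
        ]
    )

In the Java SDK, DynamoDBMapper layers this same capability over your mapped classes, so you work with objects instead of raw item maps.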

Monday, 29 April 2019

Advanced Parameters Launched By AWS Systems Manager Parameter Store

AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database connection strings, and license codes as parameter values. You can store values as plain text or encrypted data, and then reference them by the unique name that you specified when you created the parameter. Highly scalable, available, and durable, Parameter Store is backed by the AWS Cloud. AWS Systems Manager Parameter Store has now launched advanced parameters, which offer three enhanced capabilities. Advanced parameters let you create more than 10,000 parameters, use larger parameter values (up to 8 KB), which is useful for storing long values such as certificates with long key chains, and attach policies to your parameters. You can configure policies such as Expiration, ExpirationNotification, and NoChangeNotification: an Expiration policy lets you define an expiration date and time, an ExpirationNotification policy helps you track parameters that are about to expire, and a NoChangeNotification policy helps you track parameters that have not been changed for a defined period of time. Advanced parameters are charged per parameter per month and per API interaction; refer to the pricing page for details. This new feature is available in all commercial regions and AWS GovCloud (US). To learn more about AWS Systems Manager Parameter Store, refer to the product page.
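As a minimal sketch of how an advanced parameter with an attached policy might be created with boto3 (Python): the parameter name, value, and expiration timestamp below are placeholders, and the policy document is an assumed example of the Expiration policy format.

    import json
    import boto3

    ssm = boto3.client("ssm")

    # Store a long value (up to 8 KB) as an advanced parameter and attach an
    # expiration policy to it. Name, value, and timestamp are placeholders.
    ssm.put_parameter(
        Name="/prod/certs/example-cert",
        Value="-----BEGIN CERTIFICATE-----...",
        Type="SecureString",
        Tier="Advanced",
        Policies=json.dumps([
            {
                "Type": "Expiration",
                "Version": "1.0",
                "Attributes": {"Timestamp": "2019-12-31T00:00:00.000Z"},
            }
        ]),
        Overwrite=True,
    )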

Friday, 26 April 2019

Tag Updating Introduced By AWS Service Catalog

AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog lets you centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need. AWS Service Catalog now allows updating of tags on provisioned products and their related resources, so you can ensure that your existing tagging taxonomy is applied to provisioned resources. Administrators can allow tag updating on provisioned products through a Resource Update constraint, and owners can then update their provisioned products and modify tags. Tag-only updates will not affect running resources. To learn more about Service Catalog tag update administration, refer to the AWS Service Catalog Administrator Guide, and to learn how end users can maintain tags, refer to the User Guide.

Thursday, 25 April 2019

A Low-Cost FTP Solution On The AWS Cloud

Evolving software tools help businesses keep up with the increasing demands of the market. Apart from a secure working mechanism, it is the cherry on top if the cost of such software is low enough. In this post we will go through one such file upload/download tool/service provided by AWS that is secure and cheap compared to other cloud technologies.

FTP (File Transfer Protocol) is a fast and handy way to transfer small and large files over the Internet. At some point, we may have configured an FTP server backed by block storage, NAS, or a SAN. However, this kind of backend storage requires infrastructure support and can also cost a fair amount of time and money.

Why S3 FTP?
The Amazon S3 service is reliable and has a user-friendly interface. Here are some Amazon S3 highlights from the last edition of re:Invent:

  • Amazon S3 offers an infrastructure that’s “designed for durability of 99.999999999% of objects.”
  • Amazon S3 is designed to provide “99.99% availability of objects over a year.”
  • You pay for exactly what you need with no minimum commitments or up-front fees.
  • With Amazon S3, there is no limit to how much data you can store or when you can access it.

S3 FTP : Implementation

1. Using S3 : Object Storage As Filesystem :
Create an S3 bucket that will be used as the filesystem; this can be done through the AWS console or the API.

2. IAM Policy And Role :
Create an IAM policy and role to control access to the previously created S3 bucket; this can also be done through the AWS console or the API.
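For illustration, a minimal policy granting the FTP host access to the bucket might look like the following boto3 (Python) sketch; the bucket and policy names are placeholders, and the same policy can be created from the console. The policy is then attached to an IAM role used by the EC2 instance launched in the next step.

    import json
    import boto3

    iam = boto3.client("iam")
    bucket = "my-s3ftp-bucket"  # the bucket created in step 1 (placeholder name)

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            },
        ],
    }

    iam.create_policy(
        PolicyName="s3fs-ftp-bucket-access",
        PolicyDocument=json.dumps(policy_document),
    )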

3. FTP Server :
Launch an EC2 instance that will be used to host the FTP service.

4. Setting Up S3FS On FTP Server :
We will configure s3fs on the FTP server in order to mount the S3 bucket as a filesystem. Follow the steps below.

Step-1 :- If you are using a new CentOS or Ubuntu instance, update the system first.

  • For CentOS or Red Hat
  •     # sudo yum update -y
  • For Ubuntu
  •     # sudo apt-get update && sudo apt-get upgrade -y

Step-2 :- Install the dependencies.

  • For CentOS or Red Hat
  •     # sudo yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel
  • For Ubuntu
  •      # sudo apt-get install automake autotools-dev fuse g++ git libcurl4-gnutls-dev libfuse-dev libssl-dev libxml2-dev make pkg-config

Step-3 :- Clone the s3fs source code from GitHub.

  • git clone https://github.com/s3fs-fuse/s3fs-fuse.git

Step-4 :- Now navigate to the source code directory, and compile and install the code with the following commands:

  • cd s3fs-fuse
  • ./autogen.sh
  • ./configure --prefix=/usr --with-openssl
  • make
  • sudo make install
Step-5 :- Use the command below to check where the s3fs binary was installed; if it prints a path, the installation succeeded.

  • which s3fs

5. Configure FTP User Account And Home Directory :
Create an ftptest user account that we will use to authenticate against our FTP service:

  • sudo adduser ftptest 
  • sudo passwd ftptest

Now create the directory structure for the ftptest user account, which we will later configure within our FTP service and onto which the S3 bucket will be mounted using s3fs:

  • sudo mkdir /home/ftptest/ftp
  • sudo chown nfsnobody:nfsnobody /home/ftptest/ftp
  • sudo chmod a-w /home/ftptest/ftp
  • sudo mkdir /home/ftptest/ftp/files
  • sudo chown ftptest:ftptest /home/ftptest/ftp/files

6. Install And Configure vsftpd Over The Server :
Now install our FTP service with the vsftpd package, then adjust its configuration as sketched below:

  • sudo yum -y install vsftpd
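The package install is the only command shown here; the configuration itself lives in /etc/vsftpd/vsftpd.conf. As a rough sketch of a chrooted, passive-mode setup for the ftptest layout above (the values are illustrative and should be adapted to your environment), the relevant lines might look like this:

    anonymous_enable=NO
    local_enable=YES
    write_enable=YES
    chroot_local_user=YES
    user_sub_token=$USER
    local_root=/home/$USER/ftp
    pasv_enable=YES
    pasv_min_port=40000
    pasv_max_port=40100
    # pasv_address=<public IP of the EC2 instance>

After editing the file, restart the service (sudo systemctl restart vsftpd) and remember to open port 21 and the passive port range in the instance's security group.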

7. Startup S3FS and Mount Directory :
We will configure s3fs to mount the S3 bucket using the commands below:

  1. Gather IAM credentials (access key and secret key) for the required S3 bucket access or full S3 access.
  2. Create a file in /etc with the name passwd-s3fs and paste the access key and secret key in the format below.
    • vi /etc/passwd-s3fs
    •        Your_accesskey:Your_secretkey
  3. Change the permissions of the file.
    • sudo chmod 640 /etc/passwd-s3fs
  4. Mount the bucket on the directory created in step 5.
    • sudo s3fs your_bucketname -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /home/ftptest/ftp/files
  5. To remount the bucket automatically after a reboot, add the same command to /etc/rc.local.
    • vi /etc/rc.local
    • sudo s3fs your_bucketname -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /home/ftptest/ftp/files

At last, we can check whether the S3 bucket is mounted successfully on the desired folder on the server:

  •  df -h

We can also connect with FileZilla or any other FTP client and test the whole setup by uploading and downloading files to Amazon S3 through s3fs.
 

In this post we have seen how to leverage s3fs together with S3 and FTP to build a file transfer solution. If you have any queries then write to us at support@cloud.in

Wednesday, 24 April 2019

AWS Glue Is Now Available In The AWS GovCloud (US-East) Region

AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it simpler for customers to prepare and load their data for analytics. You can create and run an ETL job with a few clicks in the AWS Management Console. You simply point AWS Glue to your data stored on AWS, and AWS Glue discovers your data and stores the associated metadata (e.g. table definitions and schemas) in the AWS Glue Data Catalog. Once cataloged, your data is immediately searchable, queryable, and available for ETL. AWS Glue crawls your data sources, identifies data formats, and suggests schemas and transformations, and it automatically generates the code to run your data transformation and loading processes. AWS Glue is serverless, so there is no infrastructure to provision or manage; it runs your ETL jobs on a fully managed Apache Spark environment. You can now use AWS Glue in the AWS GovCloud (US-East) Region. For the full list of regions where AWS Glue is available, see the AWS Region Table.
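As a hypothetical sketch of that workflow with boto3 (Python): a crawler is pointed at data in S3, the discovered tables land in the Data Catalog, and a previously defined job is then run. The crawler, role, database, bucket, and job names are all placeholders.

    import boto3

    glue = boto3.client("glue", region_name="us-gov-east-1")  # AWS GovCloud (US-East)

    # Crawl a data set in S3 so its schema is stored in the Glue Data Catalog.
    glue.create_crawler(
        Name="sales-data-crawler",
        Role="AWSGlueServiceRole-demo",
        DatabaseName="analytics",
        Targets={"S3Targets": [{"Path": "s3://my-bucket/sales/"}]},
    )
    glue.start_crawler(Name="sales-data-crawler")

    # Run an ETL job that was defined earlier (for example, in the console).
    run = glue.start_job_run(JobName="sales-etl-job")
    print(run["JobRunId"])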

Monday, 22 April 2019

Amazon Aurora Serverless Provides Sharing and Cross-Region Copying of Snapshots

Amazon Aurora (Aurora) is a fully managed relational database engine that is compatible with MySQL and PostgreSQL. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. The code, tools, and applications you use with your existing MySQL and PostgreSQL databases can be used with Aurora. You can now share snapshots of Aurora Serverless DB clusters with other AWS accounts or publicly, and AWS also provides the ability to copy Aurora Serverless DB cluster snapshots across AWS regions. Authorized AWS accounts can restore a DB cluster directly from a shared DB cluster snapshot without copying it. DB cluster snapshots can be shared in a variety of scenarios, for example when you use separate AWS accounts to isolate development and production environments, share data with partners, or collaborate publicly on research projects. Copying DB cluster snapshots to another AWS region lets you keep a replica of your data for disaster recovery or for database migration. To share or copy your DB cluster snapshots, use the AWS Management Console, the AWS SDK, or the CLI. For more information about Amazon Aurora, refer to the documentation.
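As a minimal sketch with boto3 (Python), assuming an unencrypted manual snapshot whose identifier, account IDs, and regions are placeholders: the first call shares the snapshot with another account, and the second copies it to a different region.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Share a manual Aurora Serverless DB cluster snapshot with another AWS account.
    rds.modify_db_cluster_snapshot_attribute(
        DBClusterSnapshotIdentifier="my-serverless-snapshot",
        AttributeName="restore",
        ValuesToAdd=["123456789012"],
    )

    # Copy the snapshot to another region for disaster recovery or migration.
    rds_eu = boto3.client("rds", region_name="eu-west-1")
    rds_eu.copy_db_cluster_snapshot(
        SourceDBClusterSnapshotIdentifier=(
            "arn:aws:rds:us-east-1:111122223333:cluster-snapshot:my-serverless-snapshot"
        ),
        TargetDBClusterSnapshotIdentifier="my-serverless-snapshot-copy",
        SourceRegion="us-east-1",
    )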

Saturday, 20 April 2019

AWS SMS Supports Migrating VMs Running in Microsoft Azure

AWS Server Migration Service (SMS) is an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS lets you automate, schedule, and track incremental replications of live server volumes, making it easier to coordinate large-scale server migrations. AWS Server Migration Service (SMS) now supports migrating virtual machines (VMs) running in Microsoft Azure to the AWS cloud. This new capability makes it easy to migrate existing applications running in Microsoft Azure to AWS to take advantage of greater reliability, faster performance, more security capabilities, and lower costs. Previously, customers could migrate VMs running in VMware vSphere and Microsoft Hyper-V environments; now they can use the simplicity and convenience of Server Migration Service to migrate VMs running in Microsoft Azure as well. You can discover Azure VMs, group them into applications, and migrate the application group as a single unit, avoiding the complexity of coordinating the replication of individual servers or decoupling application dependencies. AWS SMS significantly reduces the time required to migrate applications and lowers the risk of errors in the migration process. To learn more, see AWS SMS, and for technical information, see the documentation.

Friday, 19 April 2019

New Amazon Aurora Serverless, GraphQL, and OAuth Capabilities Introduced By The Amplify Framework

The Amplify Framework is an open-source project for creating cloud-enabled applications. The Amplify Framework now supports adding Amazon Aurora Serverless as a data source for your AWS AppSync GraphQL APIs when building mobile and web applications. This allows developers to use the Amplify CLI, which is part of the Amplify Framework, to generate a GraphQL API with an auto-generated schema and resolvers that works with an existing Aurora Serverless database. Previously, developers had to set up an AWS Lambda function to use Aurora Serverless as a data source for a GraphQL API. The GraphQL Transform library, included in the Amplify CLI, provides a simple abstraction that helps developers quickly build scalable web and mobile backends on AWS. This release adds a capability to the GraphQL Transform library that lets developers offer fine-grained access control over their APIs by configuring permission rules for top-level and individual fields. Developers can also configure access to linked fields within a model, or to fields that represent relationships between data. In addition, this release lets developers using the Amplify JavaScript library trigger OAuth flows in their web applications with a single line of code. For further details on AWS Amplify, refer to the documentation and the related blog post.

Thursday, 18 April 2019

How To Set Up Your First Instance With Amazon Lightsail

Amazon Web Services (AWS) launched a new service called Amazon Lightsail in 2016. Amazon Lightsail is a flat-rate, low-cost computing solution that is easy to set up and requires little maintenance. In the server hosting world, such systems are known as VPS, i.e. Virtual Private Servers.

The AWS Lightsail VPS is a scaled-down version of the Elastic Compute Cloud (EC2) service. For production workloads, EC2 instances require a range of fine-tuning: selecting the right Amazon Machine Image (AMI), placement in a virtual private cloud (VPC), assigning security groups, configuring network interfaces, and so on.

Amazon Lightsail instances don't need any of this. This ease of setup and operation means that Lightsail instances don't need to be maintained by dedicated server teams. They are ideal for developers, enthusiasts, and small teams.

Step 1 : Sign-Up For AWS
To launch your first Lightsail instance you need an AWS account. Sign up for AWS, or sign in to AWS if you already have an account.

Step 2 : After Login In AWS Management Console, Select Amazon Lightsail
Amazon Lightsail is a low-cost VPS (virtual private server) service launched by Amazon Web Services (AWS). Launching an instance becomes very easy, even for non-technical people. With Lightsail you can launch a virtual machine preconfigured with SSD-based storage, DNS management, and a static IP address in a few simple clicks.

Step 3 : Click On The Create Instance Button



Step 4 : Choose The AWS Region And Availability Zone For Your Instance



Step 5 : Choose Your Platform And OS Which You Want To Install For Instance



Step 6 : Create New SSH Key
If you don't choose to use the default key, you can create a new key pair at the time you create your Lightsail instance.

  1. If you haven't done it yet, choose Create instance.
  2. On the Create an instance page, choose change SSH key pair.
  3. Choose Create new.
  4. Lightsail displays the region where we're creating the new key.


Step 7 : Choose An Instance Plan
A plan includes a low, predictable cost, a machine configuration (RAM, SSD, vCPU), and a data transfer allowance. You can try the $3.50 USD Lightsail plan free of charge for one month (up to 750 hours); AWS credits one free month to your account.



Step 8 : Identify Your Instance
You can rename your instance to something more appropriate, and finally click Create.



Step 9 : Create A Lightsail Static IP Address And Attach It To Your Instance
The default public IP for your Lightsail instance changes if you stop and start the instance. A static IP address, attached to an instance, stays the same even if you stop and start your instance.

Create a static IP address and attach it to your Lightsail instance.
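The console flow is shown above; for reference, the same step can also be scripted, for example with boto3 (Python). The instance and static IP names below are placeholders.

    import boto3

    lightsail = boto3.client("lightsail")

    # Allocate a static IP and attach it to the instance created earlier.
    lightsail.allocate_static_ip(staticIpName="my-first-instance-ip")
    lightsail.attach_static_ip(
        staticIpName="my-first-instance-ip",
        instanceName="my-first-instance",
    )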



With the help of this article, you should be able to launch your first Lightsail instance. If you have any queries then write to us at support@cloud.in

Wednesday, 17 April 2019

Now Resource Tagging Is Supported By Amazon FreeRTOS

Amazon FreeRTOS (a:FreeRTOS) is an operating system for microcontrollers that makes small, low-power edge devices easy to program, deploy, secure, connect, and manage. Amazon FreeRTOS extends the FreeRTOS kernel, a leading open-source operating system for microcontrollers, with software libraries that make it easy to securely connect your small, low-power devices to AWS cloud services like AWS IoT Core, or to more powerful edge devices running AWS IoT Greengrass. You can now assign tags to Amazon FreeRTOS over-the-air (OTA) update and custom configuration resources. This helps you find these resources and manage access control for them based on their tags. This capability builds on previously released support for resource tagging in AWS IoT Core. Resource tagging provides multiple advantages: tagging brings asset discoverability, where a resource query can be matched against an assigned tag so that you can find tagged resources, and tag-based policies offer access control. To learn more about how to tag IoT resources, refer to Tagging Your AWS IoT Resources. See AWS Tagging Strategies for general best practices for using tags with AWS resources.

Tuesday, 16 April 2019

New Storage and Host Metrics Added By Amazon RDS Enhanced Monitoring

Amazon Relational Database Service (RDS) Enhanced Monitoring, which gives visibility into the health of your Amazon RDS instances, now reports physical storage device metrics and secondary instance host metrics. When the Amazon RDS storage is using more than one physical device, Enhanced Monitoring collects the data for each device. In addition, when the DB instance is running in a Multi-AZ configuration, the data for each device on the secondary host is collected, along with the secondary host metrics. Physical device metrics and Multi-AZ secondary host metrics are both available on RDS for Oracle, PostgreSQL, and MySQL. With data reported for each physical device, you can observe how many physical devices make up your volumes, check whether I/O is balanced across physical devices, and see whether latency is uniform across physical devices. You can easily integrate Enhanced Monitoring with third-party applications to monitor your Amazon RDS DB instances. Enhanced Monitoring metrics are stored in CloudWatch Logs for 30 days by default; to change how long the metrics are kept, modify the retention period of the RDSOSMetrics log group in the CloudWatch console. For the complete list of available metrics and more information about this feature, refer to the Enhanced Monitoring documentation.
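The retention change can also be scripted; as a small sketch with boto3 (Python), assuming you want to keep the metrics for 90 days instead of the default 30:

    import boto3

    logs = boto3.client("logs")

    # Enhanced Monitoring writes to the RDSOSMetrics log group in CloudWatch Logs.
    logs.put_retention_policy(logGroupName="RDSOSMetrics", retentionInDays=90)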

Monday, 15 April 2019

AWS Amplify Console Now Available In Five More Regions

The AWS Amplify Console offers a Git-based workflow for deploying and hosting fullstack serverless web applications. A fullstack serverless app consists of a backend built with cloud resources such as GraphQL or REST APIs and file and data storage, and a frontend built with single-page application frameworks such as React, Angular, Vue, or Gatsby. The Amplify Console accelerates your application release cycle by providing a simple workflow for deploying full-stack serverless applications: you just connect your application's code repository to the Amplify Console, and changes to your frontend and backend are deployed in a single workflow on every code commit. AWS Amplify Console is now available in five additional AWS Regions: Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Mumbai), and EU (Frankfurt). With this expansion, the AWS Amplify Console is now available in 11 regions. The AWS Amplify Console is a fully managed hosting service; to learn more about hosting your web application, refer to Getting Started.

Saturday, 13 April 2019

Now Amazon QuickSight Offers Localization, Percentile Calculations And More

Amazon QuickSight is a fast, cloud-powered business intelligence service that makes it easy to deliver insights to everyone in your company. As a fully managed service, QuickSight lets you easily create and publish interactive dashboards that include ML Insights. Dashboards can then be accessed from any device and embedded into your applications, portals, and websites. Amazon QuickSight is now available in ten major languages: English, German, Spanish, French, Italian, Portuguese, Japanese, Korean, Simplified Chinese, and Traditional Chinese. With support for these languages across the whole product, Amazon QuickSight now makes it easier for everyone to get deeper insights from their data. In addition, Amazon QuickSight supports percentile calculations, which let you compute the 50th, 90th, 95th, or any nth percentile of a metric to easily visualize the distribution of your data. You can also configure your visuals to show a custom number of data points or groups before grouping the rest into the "other" category. This feature is available for bar charts, combo charts, line charts, pie charts, heat maps, and tree maps. These updates are available in the Standard and Enterprise Editions in all QuickSight regions: US East (N. Virginia and Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Singapore, Sydney and Tokyo).

Friday, 12 April 2019

Amazon Pinpoint Introduces A New Analytics Dashboard

Amazon Pinpoint is an AWS service that you can use to engage with your customers across different messaging channels. Whether you're a developer, marketer, or business user, you can use Amazon Pinpoint to deliver push notifications, emails, SMS text messages, and voice messages by creating a messaging campaign that sends customized messages on a schedule you specify. Amazon Pinpoint now introduces a new dashboard for transactional SMS messages. This dashboard contains data on the number of SMS messages you sent, the number of messages that were received, and your average delivery rate. It also includes a section that breaks out your message deliveries by country, making it easy to track how many messages you sent to each country or region, as well as the average cost of sending those messages. This data helps you refine your estimates and delivery performance for transactional SMS messages. Amazon Pinpoint is available in multiple AWS Regions; see AWS Regions and Endpoints. For more information on the new transactional SMS message dashboards, read the Amazon Pinpoint User Guide.

Thursday, 11 April 2019

Application Load Balancer Vs. Classic Load Balancer

Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Elastic Load Balancing offers three types of load balancers, all of which provide the high availability, automatic scaling, and robust security required to make your applications fault tolerant.

Let's compare Application Load Balancers and Classic Load Balancers.


  • Classic Load Balancer



  • Application Load Balancer


1. Usage Pattern

  • A Classic Load Balancer is employed for simple load balancing of traffic across multiple EC2 instances.
  • Application Load Balancer is employed for microservices or container-based architectures where there is a need to route traffic to multiple services or load balance across multiple ports on the same EC2 instance.

2. Supported Protocols

  • Classic Load Balancer operates at both layer 4 and layer 7 and supports HTTP, HTTPS, TCP, and SSL.
  • Application Load Balancer operates at layer 7 and supports HTTP, HTTPS, HTTP/2, and WebSockets.

3. Supported Platforms

  • Classic Load Balancer supports both EC2-Classic and EC2-VPC
  • Application Load Balancer supports only EC2-VPC

4. Back-end Server Authentication
  • Back-end server authentication enables authentication of the instances: the load balancer communicates with an instance only if the public key that the instance presents matches a public key in the load balancer's authentication policy. Classic Load Balancer supports back-end server authentication, whereas Application Load Balancer does not.

5. Cross-zone Load Balancing
  • Cross-zone load balancing helps distribute incoming requests evenly across all instances in the enabled AZs, regardless of how many instances each AZ hosts.
    1. Classic Load Balancer supports cross-zone load balancing, but it must be enabled explicitly.
    2. Application Load Balancer supports cross-zone load balancing, and it is always enabled.

6. Health Checks
  • Both Classic and Application Load Balancers support health checks to determine whether an instance is healthy or unhealthy.

7. CloudWatch Metrics
  • Both Classic and Application Load Balancers integrate with CloudWatch to provide metrics, with ALB providing additional metrics.

8. Access Logs
  • Access logs capture detailed information about requests sent to the load balancer. Each log entry contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses.
    1. Classic Load Balancer provides access logs.
    2. Application Load Balancer also provides access logs, with additional attributes.

9. Dynamic Ports
  • Classic Load Balancer does not support dynamic port mapping with ECS.
  • ALB supports dynamic port mapping with ECS, which allows two containers of a service to run on a single server on dynamic ports that ALB automatically detects and reconfigures itself for.

10. Host-Based Routing & Path-Based Routing
  • Host-based routing : Use host conditions to define rules that forward requests to different target groups based on the host name in the Host header. This allows ALB to serve multiple domains using a single load balancer.
  • Path-based routing : Use path conditions to define rules that forward requests to different target groups based on the URL in the request. Each path condition has one path pattern; if the URL in a request matches the path pattern in a listener rule, the request is routed using that rule.
    1. Classic Load Balancer does not support host-based or path-based routing.
    2. Application Load Balancer supports host-based and path-based routing (see the sketch below).
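As a hypothetical sketch of a path-based rule with boto3 (Python), where the listener and target group ARNs are placeholders: requests whose path matches /api/* are forwarded to a dedicated target group.

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:region:account-id:listener/app/my-alb/...",
        Priority=10,
        Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
        Actions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:region:account-id:targetgroup/api-tg/...",
        }],
    )

A host-based rule is the same call with the condition field set to host-header.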

11. Deletion Protection
  • A load balancer cannot be deleted while deletion protection is enabled.
  • Classic Load Balancer does not support deletion protection; ALB supports deletion protection.
 
If you have any queries related to Elastic Load Balancing or its types, then write to us at sales@cloud.in

Wednesday, 10 April 2019

Amazon RDS For Oracle Expands Storage Size Up To 64 TiB

Amazon Relational Database Service (Amazon RDS) is a web service that helps you set up, operate, and scale a relational database in the cloud. Amazon RDS for Oracle makes it easy to use replication to increase availability and reliability for production workloads. You can launch different editions of Oracle Database in a few minutes with cost-efficient and resizable hardware capacity. You can now create Amazon RDS for Oracle database instances with up to 64 TiB of storage and provisioned I/O performance of up to 80,000 IOPS, and existing database instances can be scaled up to 64 TiB of storage without any downtime. The new storage limit is an increase from 32 TiB and is supported for the Provisioned IOPS and General Purpose SSD storage types. In addition, Amazon RDS now supports increased performance of up to 80,000 IOPS for instances using Provisioned IOPS SSD storage. This increase lets you consolidate database shards into a single database instance, which simplifies database management. To learn more about storage, refer to Storage for Amazon RDS; for regional availability, visit Amazon RDS for Oracle Pricing.
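As a rough sketch with boto3 (Python), assuming an existing instance on Provisioned IOPS SSD storage whose identifier is a placeholder: the instance is scaled to the new 64 TiB (65,536 GiB) limit with 80,000 IOPS.

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(
        DBInstanceIdentifier="my-oracle-db",
        AllocatedStorage=65536,   # GiB (64 TiB)
        Iops=80000,
        ApplyImmediately=True,
    )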

Friday, 5 April 2019

Video On Demand On AWS Now Leverages AWS Elemental MediaConvert QVBR Mode

Amazon Web Services (AWS) gives its users multiple offerings designed to cost-effectively deliver on-demand video content to a global audience. These offerings let you dynamically scale any combination of video storage, processing, and delivery services on the AWS Cloud. AWS has updated Video on Demand on AWS, a solution that automatically provisions the AWS services required to build a scalable, distributed video-on-demand workflow. The solution now leverages the AWS Elemental MediaConvert Quality-Defined Variable Bitrate (QVBR) encoding mode, which ensures consistent, high-quality video transcoding with the smallest file size for any type of source video content. With QVBR, the encoder determines the right number of bits to use for each part of the video to maintain the video quality that you specify. The solution deploys AWS Lambda, Amazon S3, AWS Step Functions, AWS Elemental MediaConvert, Amazon DynamoDB, Amazon CloudWatch, Amazon SNS, and Amazon CloudFront. To learn more about Video on Demand on AWS, refer to the solution webpage.

Thursday, 4 April 2019

Best Practices To Manage Apps On Amazon SNS

Amazon SNS is a fully managed service from AWS used for sending notifications to people, machines, and devices. You may have configured billing alerts or CloudWatch alarm notifications on your AWS accounts; Amazon SNS is responsible for delivering those notifications. The service consists of topics and subscribers. One topic can have multiple subscribers, and when a message is published to a topic it is received by all of that topic's subscribers, so if you want to send certain messages to only one set of subscribers, you need to create a separate topic for each set of subscribers.

Some Of The Key Features Of Amazon SNS :

  1. It is a cost-efficient and easy way to push notifications to mobile users, email recipients, or even other distributed services.
  2. A variety of platforms are supported by the SNS service, such as iOS, Android, Java, Python, PHP, and NodeJS through the AWS SDKs.
  3. You can get delivery status information via Amazon CloudWatch on success and failure rates for mobile push messages as well as deliveries to SMS, SQS, HTTP, and Lambda destinations.
  4. Use SNS as a message bus to send alarms, messages, and notifications from AWS services such as CloudWatch, RDS, and S3 to other AWS services such as SQS and Lambda.

Best Practices To Manage Apps On Amazon SNS :
  • It is good to understand how the applications you write for Amazon SNS are managed. The device tokens are managed by AWS on behalf of its clients.
  • The tokens identify the device and the particular app on that device. A publisher must have access to this identification, together with the client's credentials, in order to publish to the device.
  • When a user registers a device with Amazon SNS, a token that is unique to that user is recorded.
  • The generated token is combined with some other data and used to publish to the user's device; this token together with the additional data is called a PlatformEndpoint (see the sketch below).
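As a minimal sketch with boto3 (Python), where the platform application ARN and device token are placeholders: the first call registers the token and returns the PlatformEndpoint, and the second publishes a push notification to it.

    import boto3

    sns = boto3.client("sns")

    # Register a device token with a platform application; SNS returns the
    # PlatformEndpoint ARN for this device/app combination.
    endpoint = sns.create_platform_endpoint(
        PlatformApplicationArn="arn:aws:sns:us-east-1:111122223333:app/GCM/MyAndroidApp",
        Token="device-token-from-the-mobile-app",
        CustomUserData="user-42",
    )

    # Publish a push notification directly to that endpoint.
    sns.publish(
        TargetArn=endpoint["EndpointArn"],
        Message="Hello from Amazon SNS!",
    )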

SNS Supported Transport Protocols :
  • HTTP, HTTPS : Specify a subscriber URL as part of the subscription registration; notifications will be delivered through an HTTP POST to the specified URL.
  • Email, Email-JSON : Messages are sent to registered addresses as email. Email-JSON sends notifications as a JSON object, while Email sends text-based email.
  • SQS : We can specify an SQS queue as the endpoint; SNS will enqueue a notification message to the specified queue.
  • SMS : Messages are sent to registered phone numbers as SMS text messages.

SNS Supported Endpoints :
  • Email Notifications : SNS provides the ability to send email notifications.
  • SMS Notifications : SNS provides the ability to send and receive Short Message Service (SMS) notifications to SMS-enabled mobile phones and smartphones.
  • Mobile Push Notifications : SNS provides the ability to send push notification messages directly to apps on mobile devices. A push notification message sent to a mobile endpoint can appear in the mobile app as a message alert, badge update, or even a sound alert.

Supported Push Notification Services :
  • Amazon Device Messaging (ADM)
  • Apple Push Notification Service (APNS)
  • Google Cloud Messaging (GCM)
  • Windows Push Notification Service (WNS) for Windows 8+ and Windows Phone 8.1+
  • Microsoft Push Notification Service (MPNS) for Windows Phone 7+
  • Baidu Cloud Push for Android devices in China

Wednesday, 3 April 2019

Amazon DynamoDB Reduces The Cost Of Global Tables By Eliminating Associated Charges For DynamoDB Streams

Amazon DynamoDB global tables provide a fully managed solution for deploying a multi-region, multi-master database that offers fast, local read and write performance for massively scaled global applications, without requiring you to build and maintain your own replication solution. When you create a global table, you specify the AWS regions where you want the table to be available. DynamoDB performs all of the tasks required to create identical tables in these regions and propagate ongoing data changes to all of them. Amazon DynamoDB has reduced the cost of global tables by eliminating the associated charges for DynamoDB Streams. Previously, the cross-region replication performed by global tables incurred charges for streams. Now, you are no longer charged for streams resources used by global tables to replicate changes from one replica table to all other replicas. You are still charged for any streams usage performed by your own applications that read from a replica table's stream. Global tables replicate your DynamoDB tables automatically across your choice of AWS Regions. For more information, see the Global Tables section of the DynamoDB Developer Guide.

Tuesday, 2 April 2019

New Python Script Announced For Getting Started With Amazon Elastic Inference

Amazon Elastic Inference (EI) lets you attach low-cost GPU-powered acceleration to Amazon EC2 and Amazon SageMaker instances to reduce the cost of running deep learning inference by up to 75%. Amazon EI supports TensorFlow, Apache MXNet, and ONNX models, with more frameworks coming soon. Amazon EI is currently available in four regions: US East, US West, EU, and Asia Pacific. If you are using Amazon EI for the first time, setting up your environment to launch instances with Amazon EI can take time, because you need to set up a number of dependencies, including AWS PrivateLink VPC endpoints, AWS IAM policies, and security group rules, before you can use Amazon EI accelerators. AWS has now announced a new Python script that makes it easy to get started with Amazon Elastic Inference by creating the required resources, helping you launch Amazon EI accelerators in minutes. The script ensures that all settings are correctly configured and that the instance is launched with the permissions necessary to use Amazon EI. For more information on how to use the Amazon EI Python script, refer to "Launch EI accelerators in minutes with the Amazon Elastic Inference setup tool for EC2." You can download the amazonei_setup.py script from GitHub to your local machine and run it from your terminal using the command: $ python amazonei_setup.py

Monday, 1 April 2019

Introducing AWS Firewall Manager Support For AWS Shield Advanced

AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. There are two tiers of AWS Shield: Standard and Advanced. AWS Shield Standard provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall. You can now use AWS Firewall Manager to easily configure and manage AWS Shield Advanced DDoS protection for all the resources in your organization across multiple accounts. With this capability, customers can create Shield Advanced protection policies to automatically discover existing and new resources and continuously enforce DDoS protection on all resources, or use tags to define a subset of resources. In addition, AWS Firewall Manager offers central visibility into threats detected by Shield Advanced across all applications in your organization. AWS Firewall Manager is available to all Shield Advanced customers at no additional cost; you only pay for the AWS Shield Advanced and AWS Config resources created by AWS Firewall Manager. To learn more about AWS Firewall Manager, refer to the Firewall Manager documentation.
