Friday, 27 January 2023

Disaster Recovery Strategies with AWS




Disaster Recovery (DR) refers to the process of preparing for and recovering from a disaster. In this article, we describe the disaster recovery scenarios and alternatives available on the public cloud, specifically AWS. When designing fault-tolerant, highly available AWS solutions, we need a high-level grasp of these alternatives.


Let's first understand what RTO and RPO mean with respect to disaster recovery.


RTO (Recovery Time Objective) - The maximum amount of time allowed for recovery after a disaster, that is, for restoring a business process to its service level as defined by the Operational Level Agreement.


For example, if a disaster strikes at 12:00 PM (noon) and the RTO is four hours, the DR procedure should have the system back up and running by 4:00 PM.


RTOs are often challenging to meet because they involve restoring all IT functions. By automating as much of the recovery as possible, your IT department can speed up the process. Meeting an aggressive RTO can be more expensive than meeting a granular RPO, because a demanding RTO covers your complete company infrastructure, not just data. The cost of achieving a given RTO or RPO depends on your IT department's application and data priorities, which are typically ranked by revenue impact and risk. If the data in an application is regulated, data loss from that application could result in substantial fines regardless of how frequently it is used.


RPO (Recovery Point Objective) - The maximum amount of data, measured as a period of time, that an organization can tolerate losing.


For example, if a disaster strikes at 12:00 PM (noon) and the RPO is one hour, the system should recover all data stored prior to 11:00 AM. In other words, the maximum data loss is one hour, between 11:00 AM and 12:00 PM.



DR Scenarios - 


  1. Active-Active

  2. Warm Standby

  3. Pilot Light

  4. Backup-Restore



Of these four scenarios, Active-Active has the lowest RTO, while Backup-Restore has the highest.


Now, let's take a closer look at each of these DR scenarios.


1. Active-Active - The secondary infrastructure is a full replica of the primary site's structure, size, and services. This gives you the highest performance, availability, and fastest recovery time of the DR scenarios described here. The cost, however, is roughly double that of the primary infrastructure.


In an AWS multi-region system, the Active-Active configuration can provide not just failover but also load balancing. We can use Route 53 with a weighted routing policy to balance the load between regions.


When a disaster strikes, or if an entire region fails, Route 53 directs all traffic to the secondary site. There is no need to scale the infrastructure, because both the primary and secondary sites were already running at full capacity before the disaster.
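
As an illustration, here is a minimal boto3 sketch of the weighted routing setup described above. It is only a sketch: the hosted zone ID, record name, and regional load balancer DNS names are placeholders, not values from this article.

    import boto3

    # Placeholders: replace with your own hosted zone and regional endpoints.
    HOSTED_ZONE_ID = "Z0000000000000000000"
    DOMAIN = "app.example.com"

    route53 = boto3.client("route53")

    def upsert_weighted_record(set_identifier, alb_dns_name, weight):
        """Create or update one weighted CNAME record for a regional endpoint."""
        route53.change_resource_record_sets(
            HostedZoneId=HOSTED_ZONE_ID,
            ChangeBatch={
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": DOMAIN,
                        "Type": "CNAME",
                        "SetIdentifier": set_identifier,
                        "Weight": weight,
                        "TTL": 60,
                        "ResourceRecords": [{"Value": alb_dns_name}],
                    },
                }]
            },
        )

    # Split traffic 50/50 between the two regions. To fail over, set the failed
    # region's weight to 0, or attach Route 53 health checks so this happens
    # automatically.
    upsert_weighted_record("primary-us-east-1", "primary-alb.us-east-1.elb.amazonaws.com", 50)
    upsert_weighted_record("secondary-eu-west-1", "secondary-alb.eu-west-1.elb.amazonaws.com", 50)
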



2. Warm Standby - The secondary environment mirrors the primary one's architecture, but with smaller components to save cost. For example, if the primary site runs an extra-large EC2 instance, the secondary site might run a medium-sized instance.


When a disaster happens, the smaller instances can be scaled up immediately to match the primary infrastructure, in less time than the Pilot Light approach requires.
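
To make the scale-up step concrete, here is a minimal boto3 sketch, assuming the standby runs on EC2; the instance ID and target instance type are placeholders, and the instance must be stopped before its type can be changed.

    import boto3

    ec2 = boto3.client("ec2")
    STANDBY_INSTANCE_ID = "i-0123456789abcdef0"  # placeholder

    # The instance type can only be changed while the instance is stopped.
    ec2.stop_instances(InstanceIds=[STANDBY_INSTANCE_ID])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[STANDBY_INSTANCE_ID])

    # Resize the standby from its smaller size to match the primary site.
    ec2.modify_instance_attribute(
        InstanceId=STANDBY_INSTANCE_ID,
        InstanceType={"Value": "m5.2xlarge"},  # placeholder: the primary's size
    )

    ec2.start_instances(InstanceIds=[STANDBY_INSTANCE_ID])
    ec2.get_waiter("instance_running").wait(InstanceIds=[STANDBY_INSTANCE_ID])
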





3. Pilot Light - Only the most important core infrastructure is run in the secondary environment. When it's time to recover, you may quickly provision a full-scale production environment around the key core.


Because the fundamental elements of the system are already functioning and are constantly maintained up to date, the pilot light technique provides a faster recovery time than the backup-restore method.


In a typical pilot light setup, for example, the database is kept running while the application servers are not.

You can use one of the following techniques to restore dormant components and scale up running components (a short sketch follows this list):


  • Launch your EC2 instances from your up-to-date AMIs.

  • If necessary, scale up database instances, and add failover functionality such as Multi-AZ to both dormant and active components.

  • Update the Route 53 DNS records to point to the secondary site.
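
As a rough illustration of the first two steps, the boto3 sketch below launches application servers from an AMI and scales up an RDS instance; every identifier (AMI, subnet, security group, DB identifier, instance classes) is a placeholder.

    import boto3

    ec2 = boto3.client("ec2")
    rds = boto3.client("rds")

    # Launch application servers from the latest AMI (all IDs are placeholders).
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m5.large",
        MinCount=2,
        MaxCount=2,
        SubnetId="subnet-0123456789abcdef0",
        SecurityGroupIds=["sg-0123456789abcdef0"],
    )

    # Scale up the always-on pilot light database and enable Multi-AZ failover.
    rds.modify_db_instance(
        DBInstanceIdentifier="pilot-light-db",
        DBInstanceClass="db.m5.xlarge",
        MultiAZ=True,
        ApplyImmediately=True,
    )
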




4. Backup-Restore - In the AWS ecosystem, several services are available for implementing a backup-restore strategy.


  • Amazon S3 – Amazon S3 is an excellent place to store backup data that may be needed at short notice for a restore.

  • AWS Storage Gateway – Lets you back up your on-premises data volumes by transparently copying snapshots into S3. Cached volumes let you store primary data in S3 while keeping frequently accessed data local for low-latency access. Virtual Tape Library (VTL) backups can be used in place of traditional magnetic tape backups.

  • Amazon S3 Glacier – Glacier can be used with S3 lifecycle policies to provide a tiered, long-term backup solution.

  • AWS Import/Export – Lets you transfer very large data sets by shipping storage devices directly to AWS.



When it comes to recovering data onto EC2 instances, a mix of the following methods can be used (a short sketch follows this list):


  • Restoring data from S3

  • Provisioning the instances from an AMI
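
For the S3 restore path, a minimal boto3 sketch might look like the following; the bucket name, object key, and local path are placeholders. Provisioning instances from an AMI follows the same run_instances pattern shown in the Pilot Light sketch above.

    import boto3

    s3 = boto3.client("s3")

    # Pull the most recent backup archive from the backup bucket
    # (bucket, key, and local path are placeholders).
    s3.download_file(
        Bucket="my-dr-backups",
        Key="db/2023-01-27/backup.tar.gz",
        Filename="/restore/backup.tar.gz",
    )
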



Comparison of the strategies listed above:


  • Active-Active: Expensive (roughly twice the cost), but recovery is faster than in any other DR scenario (near-zero RTO).


  • Warm Standby: More expensive than Pilot Light, but recovers faster.


  • Pilot Light: Less expensive than Warm Standby, and recovers faster than Backup-Restore.


  • Backup-Restore: Lowest cost, but recovery is slow (high RTO).



To Conclude…


Choosing a DR scenario from the ones described above comes down to the criticality of the system under consideration and the budget available. As stated earlier, the Active-Active strategy provides the best RTO, at a significant expense. If cost is a critical consideration, you can select one of the other three alternatives.


By Vishal More, Cloud Consultant, Cloud.in



Tuesday, 24 January 2023

Monitoring Log Files And Memory Utilization Using Cloudwatch Agent On AWS

 


In this blog, we will see how to monitor log files, CPU, and memory utilization using the Amazon CloudWatch agent.

For this use case, we will be using an Ubuntu-based EC2 instance. Installation of the Amazon CloudWatch agent differs only by operating system; the rest of the steps are the same.

We will be monitoring the Apache web server access log files, which are located at /var/log/apache2.


Step 1

Create an EC2 role for the CloudWatch agent and SSM access


Create an IAM role for EC2 with the following two managed policies.

  • AmazonEC2RoleforSSM

  • CloudWatchAgentServerPolicy


Attach this role to the EC2 instance. The role allows the CloudWatch agent to send logs and metrics to the CloudWatch service and also enables SSM access.
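
If you prefer to script the role creation instead of using the console, a minimal boto3 sketch is shown below; the role name is a placeholder, and you should verify the managed policy ARNs in your account.

    import json
    import boto3

    iam = boto3.client("iam")
    ROLE_NAME = "cwagent-ssm-role"  # placeholder name

    # Trust policy so EC2 instances can assume the role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    iam.create_role(RoleName=ROLE_NAME, AssumeRolePolicyDocument=json.dumps(trust_policy))

    # Attach the two managed policies mentioned above (verify the ARNs in your account).
    for policy_arn in [
        "arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM",
        "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy",
    ]:
        iam.attach_role_policy(RoleName=ROLE_NAME, PolicyArn=policy_arn)

    # An instance profile is needed to attach the role to an EC2 instance.
    iam.create_instance_profile(InstanceProfileName=ROLE_NAME)
    iam.add_role_to_instance_profile(InstanceProfileName=ROLE_NAME, RoleName=ROLE_NAME)
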






Step 2

Install the apache2 server on the instance


  • apt-get install apache2 -y

  • service apache2 start

  • service apache2 status

  • cat /var/log/apache2/access.log




Step 3

Download and install the CloudWatch agent on your instance

Download the CloudWatch agent .deb file from the following link.

  • https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb





Step 4

Install the CloudWatch agent by running the installation file


  • dpkg -i -E ./amazon-cloudwatch-agent.deb






Step 5

Start the Amazon CloudWatch agent


  • /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a start




Step 6

Configure the CloudWatch agent using the wizard


  • /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard









Step 7

Provide the config.json file generated by the wizard to the CloudWatch agent

Once we run the wizard, a config.json file is generated. This config file needs to be provided to the CloudWatch agent, which then automatically creates a config.toml file from it.

The config file is generated in /opt/aws/amazon-cloudwatch-agent/bin/.
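
For reference, a minimal config.json for this post's use case (memory and CPU metrics plus the Apache access log) might look roughly like the following; the log group and log stream names are placeholders that the wizard lets you choose.

    {
      "agent": {
        "metrics_collection_interval": 60,
        "run_as_user": "root"
      },
      "metrics": {
        "metrics_collected": {
          "mem": {
            "measurement": ["mem_used_percent"]
          },
          "cpu": {
            "measurement": ["cpu_usage_idle", "cpu_usage_user", "cpu_usage_system"]
          }
        }
      },
      "logs": {
        "logs_collected": {
          "files": {
            "collect_list": [
              {
                "file_path": "/var/log/apache2/access.log",
                "log_group_name": "apache-access-log",
                "log_stream_name": "{instance_id}"
              }
            ]
          }
        }
      }
    }
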


Command


  • /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json



Step 8

Restart the CloudWatch agent


  • /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a stop

  • /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a start




Step 9

Check the logs in the CloudWatch service


Check the CloudWatch Log groups tab; you should find the newly created log group. Also check the All metrics tab, where you should see the CWAgent namespace.
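
You can also verify this from a script; the short boto3 check below assumes the placeholder log group name used earlier.

    import boto3

    logs = boto3.client("logs")
    cloudwatch = boto3.client("cloudwatch")

    # Confirm that the log group created by the agent exists.
    groups = logs.describe_log_groups(logGroupNamePrefix="apache-access-log")
    print([g["logGroupName"] for g in groups["logGroups"]])

    # List the custom metrics published by the agent under the CWAgent namespace.
    metrics = cloudwatch.list_metrics(Namespace="CWAgent")
    print([m["MetricName"] for m in metrics["Metrics"]])
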







Additional Docs

For troubleshooting and details on the agent's own log files, refer to the Amazon CloudWatch agent documentation.




Summary

In this post, we saw how to monitor log files and memory utilization using the Amazon CloudWatch agent. First, we created a role with two policies that grant the necessary permissions for SSM and the CloudWatch agent. Next, we downloaded and installed the CloudWatch agent. Following that, we generated a config.json file by running the CloudWatch agent configuration wizard and provided it to the agent. Lastly, we restarted the agent and verified the log group and metrics in the CloudWatch console.


By: Shubham Kumar, DevSecOps Engineer (Cloud.in)





Friday, 13 January 2023

Tips to Optimize Your AWS Costs and Maximize Your ROI

As organizations strive to cut costs while still making the most of their cloud investments, cost optimization on AWS has become a major focus. Whether you’re an AWS veteran or just getting started, there are several methods you can use to optimize your cloud costs and maximize your ROI.

1. Right-Sizing: The first step in AWS cost optimization is to ensure that your resources are sized correctly for your workloads. This means that resources should be scaled up or down based on the current usage needs of your applications, so that you are neither over-provisioning capacity you don’t need nor under-provisioning the capacity you do need.

2. Reserved Instances: Reserved Instances are a great way to save money on AWS. By reserving instances for a specific period of time, you get a discounted rate on the instances, which can add up to significant savings over the long term.

3. Savings Plans: AWS Savings Plans are a great way to optimize your cloud costs and take advantage of the economies of scale that come with using the cloud. By subscribing to a Savings Plan, you’re committing to a specific amount of compute spend, but you’re not locked into any particular instance type. This gives you the flexibility to switch between instance types and commit to different term lengths as your needs change. Compute Savings Plans apply to usage across Amazon EC2, AWS Lambda, and AWS Fargate.

4. Spot Instances: Spot Instances let you use spare EC2 capacity at steep discounts and can be a great way to reduce costs. They are a cost-effective way to run batch jobs or other applications that can tolerate interruption, as they can be reclaimed by AWS at any time.

5. Automation: Automating your cloud operations can save you time and money in the long run. This includes automating instance creation, scaling, and retirement. Automation can help you optimize your resources by ensuring that they are only running when they are needed. 

6. Optimize Storage: Optimizing your AWS storage can help you save money. S3 Storage Class Analysis can be used to analyze access patterns and choose the most cost-effective storage class for your data, such as S3 Standard versus S3 Standard-Infrequent Access, or Glacier for long-term archival data.
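
For example, transitions to cheaper storage classes can be automated with an S3 lifecycle rule; in the boto3 sketch below, the bucket name, prefix, and day thresholds are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Transition objects under a prefix to cheaper storage classes over time
    # (bucket name, prefix, and day thresholds are placeholders).
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-app-data",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "tiered-archival",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }]
        },
    )
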

7. Monitor Spending: Monitoring your spending on AWS is essential for cost optimization. This includes keeping track of usage and costs for each of your resources, and setting up cost alerts to notify you when costs exceed certain thresholds.
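
One way to implement such an alert is a CloudWatch billing alarm; the boto3 sketch below is only an example, with a placeholder threshold and SNS topic ARN, and it assumes billing alerts are enabled in your account's billing preferences.

    import boto3

    # Billing metrics are published only in us-east-1 and must be enabled
    # under Billing preferences ("Receive Billing Alerts").
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="monthly-spend-over-500-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,                 # evaluate every 6 hours
        EvaluationPeriods=1,
        Threshold=500.0,              # placeholder threshold in USD
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder SNS topic
    )
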

8. Low-Activity Resource Waste: Resources with very low activity, such as unattached EBS volumes, idle load balancers, and unused network interfaces, can be identified using CloudWatch metrics, Trusted Advisor checks, and VPC Flow Logs. Costs can be significantly reduced by terminating or consolidating rarely used or unused resources.
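
As a simple example of this kind of check, the boto3 sketch below lists unattached EBS volumes so they can be reviewed and removed.

    import boto3

    ec2 = boto3.client("ec2")

    # Unattached (status "available") EBS volumes keep accruing charges;
    # review them and delete, or snapshot and delete, the ones you no longer need.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )
    for vol in volumes["Volumes"]:
        print(vol["VolumeId"], vol["Size"], "GiB", vol["CreateTime"])
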

9. CloudFront CDN for Internet Data Transfer Out: Data transfer out from S3, load balancers, and compute resources can be routed through the CloudFront CDN. Utilizing the CloudFront Savings Bundle or CloudFront private pricing through Cloud.in can result in substantial savings on data transfer out costs. Additional benefits may include DDoS protection, caching, and an improved security posture.

10. AWS Postpaid Billing Services by Cloud.in: Cost savings can also be achieved by utilizing AWS Postpaid Billing Services from Cloud.in. Our unique commercial engagement model allows customers to save substantially; connect with the Cloud.in sales team to learn more.

By following these cost optimization methods, you can save money on your AWS cloud bills and ensure that you are getting the most value out of your cloud investments. So, make sure to take the time to review your cloud costs and implement the right cost optimization strategies for your organization.

Tuesday, 3 January 2023

A thorough look at AWS DMS

The AWS Database Migration Service helps you move your data to AWS and other cloud providers. Here is a review of the service.


Streamlining business operations and data management is becoming even more essential, especially as more customers demand speed from their vendors. Thus, businesses are increasingly looking for better ways to manage business data at the lowest price possible to maximize profits. AWS Database Migration Service, or AWS DMS, is one such tool that can help businesses with database migration and cloud data management.

Like any forward-thinking organization, Amazon has expanded its cloud offerings and found great success with AWS products and services. AWS DMS is one of many solutions the company offers to assist with cloud data migration and management. In this review, learn more about AWS DMS’s features, how it works and possible alternatives.

What is AWS Database Migration Service?

AWS DMS is a cloud-based service that lets you easily move all of your data stores. The movement doesn’t have to be to Amazon’s cloud, either; it’s useful for users with other on-site and digital setups as well.

In addition, DMS offers a schema conversion and fleet advisor to make this process more accessible. You can use the DMS Fleet Advisor to manage your servers and physical databases. After collecting the information, it will create an inventory of everything before you officially move to the AWS cloud. The DMS Schema Conversion Tool takes these analytics and changes them to a new target engine.

How does AWS DMS work?

AWS DMS runs replication software while you tell it where to pull data from and where to upload it. You’ll then be able to schedule a task on the server, so it continues migrating your information. If you don’t have the necessary primary keys or tables, AWS DMS will create them for you, making the process much smoother. However, you can complete this step yourself, if you prefer.

AWS DMS pros and cons

AWS DMS pros

Because of the Fleet Advisor and Schema Conversion Tool, you can have your migration up and running minutes after downloading the service. It also lets you pay for resources as you use them. This ability is especially crucial as some countries head toward recession and industries rework their financial plans to deal with economic uncertainty.

AWS DMS offers three types of migration: full load only, change data capture (CDC) only, and full load and CDC. Full load only and CDC only will migrate the information in your database or the changes to it, respectively. CDC and full load will perform both processes and monitor the database as it works. Having the flexibility to choose among these migration plans is especially important for companies with large data stores that don’t want to pause workloads.
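
To illustrate how the migration type is selected, here is a hedged boto3 sketch of creating a DMS replication task; the endpoint and replication instance ARNs, task name, and table mapping rule are placeholders, not values from this review.

    import json
    import boto3

    dms = boto3.client("dms")

    # Table mapping: replicate every table in the "public" schema (placeholder rule).
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-public-schema",
            "object-locator": {"schema-name": "public", "table-name": "%"},
            "rule-action": "include",
        }]
    }

    dms.create_replication_task(
        ReplicationTaskIdentifier="orders-db-migration",
        SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",    # placeholder
        TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",    # placeholder
        ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",  # placeholder
        MigrationType="full-load-and-cdc",  # or "full-load" / "cdc"
        TableMappings=json.dumps(table_mappings),
    )
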

The most significant benefit of AWS DMS is that it is serverless and can handle all of the services you need to move your information automatically. It’s incredibly scalable, too, so you can adjust the process as you go. Another great feature is its backup replication server, which will quickly start working if the main one crashes. The backup should begin operating without interrupting the migration.

AWS DMS cons

Naturally, the migration process will sometimes slow down if you’re moving a lot of data. However, some AWS DMS users have reported missing information following data replication. If this happens to you, it will require manual intervention to resolve, which could lengthen the migration timeframe.

DMS users may also experience some lag issues with high throughput data. AWS DMS can load eight parallel tables to speed up performance with extensive replication servers. However, if there’s a lot of throughput information, the copying could slow down. It may require coding, which someone on your internal team will have to know how to do.

AWS DMS pricing

AWS DMS is a pay-as-you-go solution. It has hourly charge rates depending on the amount of data you’re moving. For instance, it could cost as low as $0.018 per hour if you move t3.micro instance types. However, the price jumps significantly to $14.43456 per hour when moving r6i.32xlarge. You’ll also pay either $0.115 or $0.23 per gigabyte of additional log storage a month, but the service offers 100GB to start.

Beyond the pay-as-you-go pricing information above, you may need to pay for T3 CPU credits, which run at $0.075 per vCPU-hour.

If you would like a thorough estimate of your spending, Amazon offers an AWS Pricing Calculator, or you can speak with a representative for a personalized quote.

AWS DMS alternatives

As with any healthy marketplace, there are many alternatives to AWS DMS you can consider. These are some of the most highly rated among users:

  • Acronis Cyber Backup
  • Supermetrics
  • Veeam Backup & Replication
  • Fivetran

Is AWS DMS right for you?

You and your data team know what’s best for your company. If you’re thinking about using AWS DMS for an upcoming data migration project, talk with your internal team of data experts and see what they think. It may be worthwhile to experiment with the tool using the AWS Free Tier before you make a commitment.

Courtesy: https://www.techrepublic.com/










Monday, 2 January 2023

Tips and tricks for securing data when migrating to the cloud

Find out how you can have a safe and secure transition to the cloud. This guide describes tips and steps to take to ensure your data is secure during a migration.


More and more organizations are moving mission-critical systems and data to the cloud. While migration to and between all types of cloud services poses security challenges, migration to and between public cloud services presents the greatest security challenge, with potentially dire consequences.

In this guide, we’ll cover some of the most common security threats companies face during cloud migration as well as best practices you can follow to combat these threats.

Is data in cloud migration secure?

According to the Flexera State of the Cloud Report 2022, public cloud adoption continues to accelerate, with half of all study respondents’ workloads and data residing in a public cloud. As a consequence of this growth, there are also growing concerns about data security during cloud migration.

Some of these security concerns include the following.

API vulnerabilities

The application programming interfaces used to connect cloud applications, data and infrastructure can be a major source of vulnerability for cloud data security. APIs may have weak authentication and authorization controls, a lack of sandbox protection, and excessive privileges. Organizations should carefully assess these vulnerabilities when migrating data to the cloud.

Security blind spots

Cloud data can also be at risk because of security blind spots in the cloud infrastructure. Issues such as using software-as-a-service applications for sensitive data and creating shadow IT networks are common in some cloud environments. Organizations should be aware of these potential vulnerabilities when migrating to the cloud and take steps to mitigate them.

Compliance requirements

Many organizations must comply with regulatory requirements when migrating data to the cloud. Security compliance requirements can be a significant challenge for organizations, especially if the cloud provider does not meet these requirements.

Data loss

Finally, migrating data to the cloud can increase the risk of data loss. This is especially true if the cloud provider does not have robust controls in place to protect and recover data in the event of a security incident.

Tips for securing data in cloud migrations

While there are many potential security problems that can arise during a cloud migration, there are also several steps your team can take to better protect your applications and data. We recommend the following seven tips to protect your organization’s data during cloud migrations.

Understand your data

Companies preparing for a cloud migration need to make sure they have an accurate understanding of their data and its requirements. That means migration teams must be aware of their data’s present and future usage as well as storage and retention policies established by the company’s data governance framework.

Various cloud management tools are available to assist with some of these data understanding and optimization tasks, including data deduplication software. Securing cloud data starts with understanding what it contains and how it will eventually be used and/or disposed of.

Understand your data compliance requirements

In addition to understanding the data itself, organizations need to be aware of any compliance requirements that apply to their datasets during cloud migrations.

For example, many enterprises are subject to regulatory frameworks such as GDPR, PCI-DSS and HIPAA, which impose strict requirements on how personally identifiable information is handled before and during data migration.

Organizations must ensure cloud infrastructure providers meet compliance requirements or implement additional controls where needed.

Secure your APIs

When migrating data to the cloud, securing the various APIs that control access to and between cloud applications and infrastructure is essential. For enhanced API security, you can start by using strong authentication and authorization controls, protecting APIs from malicious or automated attacks, and eliminating excessive user access privileges.

Encrypt your data during transit

Transmitting data in cloud migrations can create additional security vulnerabilities. One effective way to protect sensitive information is using end-to-end encryption.

This process is usually done using an encryption protocol like Transport Layer Security, which adds an additional layer of security by encrypting all data before it leaves the source system and decrypting it after it arrives in the destination system. Various encryption algorithms are available to choose from depending on the amount of protection you need, but most use modern industry standards like AES or RSA.

Companies should also be sure to securely store any encryption keys and credentials necessary for access and make regular backups in case of data loss. Utilizing a cloud provider that offers built-in encryption services can simplify this process. However, companies should still conduct their due diligence to ensure they have the proper tools and security measures before initiating the migration.
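
As one concrete example of these practices on AWS, the boto3 sketch below denies non-TLS access to a staging bucket and enables default server-side encryption; the bucket name is a placeholder, and this is just one common pattern rather than a complete migration security setup.

    import json
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-migration-staging-bucket"  # placeholder

    # Deny any request to the bucket that does not use TLS (aws:SecureTransport).
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }
    s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))

    # Encrypt objects at rest by default with SSE-KMS.
    s3.put_bucket_encryption(
        Bucket=BUCKET,
        ServerSideEncryptionConfiguration={
            "Rules": [{
                "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}
            }]
        },
    )
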

Restrict data access during cloud migration

Restricting access to data during cloud migration is a crucial step for businesses seeking to transfer their information securely. You should take multiple steps to ensure only intended users can access the data as necessary. These steps include:

  • Implementing and enforcing user-level authentication and authorization rules
  • Setting up robust two-factor authentication processes
  • Using built-in security policies from the cloud provider
  • Enabling encryption of all data before the transfer
  • Auditing who has access regularly over the migration period
  • Completing periodic vulnerability scans on systems with sensitive information during the migration
  • Deleting any credentials or access keys associated with terminated employees

Consider a phased migration strategy

It’s never a good idea to migrate data in one go, especially when dealing with large volumes of sensitive information. A phased migration strategy can help avoid data loss or other security issues and allows organizations to establish processes that prevent unauthorized access while data is in transit.

Additionally, it’s typically easier to implement security measures at a small scale and then expand them as needed over time, which allows companies to proactively identify and address potential risks before they become a bigger problem.

Implement decommissioning and sanitization activities

Decommissioning refers to examining all of your devices, drives and servers that remain in your data center. Have a checklist that documents all of that hardware, so you can be sure to remove everything from your current cloud or on-premises storage servers.

You should also ensure any data stored in off-site locations is securely deleted. Additionally, it can be helpful to conduct a security audit of your cloud infrastructure provider to make sure they have robust security measures in place to protect and monitor their systems.

How can you prevent data loss during cloud migration?

There are several measures businesses can take to help prevent data loss during cloud migrations, including:

  • Utilizing robust encryption and authentication tools for data in transit
  • Restricting access to sensitive data during migration and auditing who has access regularly
  • Backing up critical data in a system that is not central to your migration plan
  • Utilizing a phased migration approach that allows for gradual and controlled transitions
  • Implementing security measures like decommissioning, which involves removing and sanitizing all devices, drives and servers from the source system
  • Working with a cloud provider with built-in security measures and protocols to ensure data is protected throughout the migration process

By taking proactive steps to secure data during cloud migrations and carefully planning the migration process to adhere to regulatory requirements, businesses can ensure their most critical assets are not lost or compromised during the process.

Courtesy: https://www.techrepublic.com/

