Tuesday, 17 May 2022

AWS Elemental MediaTailor now supports live sources in Channel Assembly

You can now schedule live content into a linear channel created using Channel Assembly with AWS Elemental MediaTailor. You could already re-use transcoded and packaged HLS and DASH streams from your existing video on demand (VOD) catalogs; now you can also use live streams from an origin such as AWS Elemental MediaPackage as scheduled sources for a linear channel.

For VOD-only channels, you can create a basic channel, which is priced at the existing Channel Assembly rates. When using both VOD content and live sources, you need to create a standard channel configuration, which has a higher per-hour cost. Visit the MediaTailor pricing page for more details on basic and standard channel costs.

Using Channel Assembly with AWS Elemental MediaTailor, you can create linear channels that are delivered over-the-top (OTT) in a cost-efficient way, even for channels with low viewership. Virtual linear streams are created at a low running cost by using existing multi-bitrate encoded and packaged content, which can now be either VOD or live. You can also monetize Channel Assembly linear streams by inserting ad breaks in your programs without having to condition VOD or live sources with SCTE-35 markers, as the SCTE-35 ad break information is simply passed through.
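As a rough illustration of how this could be automated, the sketch below registers a live origin and schedules it as a program on an existing Channel Assembly channel using boto3. The channel, source location and program names, the timing values, and the exact request shapes are assumptions and should be verified against the current MediaTailor API documentation.

import boto3

# Hypothetical sketch: scheduling a live source into a Channel Assembly channel.
mediatailor = boto3.client("mediatailor")

# Register the live origin (for example an AWS Elemental MediaPackage endpoint).
mediatailor.create_live_source(
    SourceLocationName="my-source-location",        # assumed to exist already
    LiveSourceName="my-live-event",
    HttpPackageConfigurations=[
        {"SourceGroup": "hls", "Type": "HLS", "Path": "/out/v1/live/index.m3u8"}
    ],
)

# Schedule the live source as a program on an existing (standard) channel.
mediatailor.create_program(
    ChannelName="my-linear-channel",
    ProgramName="evening-live-show",
    SourceLocationName="my-source-location",
    LiveSourceName="my-live-event",
    ScheduleConfiguration={
        "Transition": {
            "Type": "ABSOLUTE",
            "RelativePosition": "AFTER_PROGRAM",        # required by the request shape
            "ScheduledStartTimeMillis": 1652807400000,  # example epoch milliseconds
            "DurationMillis": 3600000,                  # keep the live block for one hour
        }
    },
)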


About Cloud.in:

Cloud.in is an AWS Advanced Consulting Partner that delivers AWS managed services catering to every business need and aims to simplify the AWS Cloud journey. Our team is the driving force behind Cloud.in, with the experience, knowledge, and skills to make cloud computing and the AWS Cloud a pleasant experience.


Ref: https://aws.amazon.com/about-aws/whats-new/2022/04/aws-elemental-mediatailor-live-channel-assembly/





Evolution of cybercriminals’ attacks on cloud native environments revealed

Attackers are finding new ways to target cloud native environments, according to Nautilus, the threat research team at cloud native security provider Aqua Security.



The team’s latest research shows that adversaries are adopting more sophisticated techniques, leveraging multiple attack components, and shifting attention to Kubernetes and the software supply chain. The “2022 Cloud Native Threat Report: Tracking Software Supply Chain and Kubernetes Attacks and Techniques” provides insight on trends and key takeaways for practitioners about the cloud native threat landscape.

The study revealed that adversaries are engaging with new tactics, techniques and procedures (TTPs) to specifically target cloud native environments. While cryptominers were the most common malware observed, Team Nautilus discovered increasing usage of backdoors, rootkits, and credential stealers, signs that intruders have more than cryptomining in their plans. Backdoors, which permit a threat actor to access a system remotely and are used to establish persistence in the compromised environment, were encountered in 54% of attacks (up 9% compared with 2020). Additionally, half of the malicious container images (51%) analyzed by researchers contained worms, which allow attackers to increase the scope of their attack with minimal effort (up 10% compared with 2020).

Notably, threat actors also broadened their targets to include CI/CD and Kubernetes environments. In 2021, 19% of the malicious container images analyzed targeted Kubernetes, including kubelets and API servers, up 9% compared with the previous year.

Assaf Morag, Threat Intelligence and Data Analyst Lead, Aqua’s Team Nautilus, said: “These findings underscore the reality that cloud native environments now represent a target for attackers, and that the techniques are always evolving.

“The broad attack surface of a Kubernetes cluster is attractive for threat actors, and then once they are in, they are looking for low-hanging fruit.”

Other key findings:

– The proportion and variety of observed attacks targeting Kubernetes have increased, including wider adoption of the weaponization of Kubernetes UI tools.
– Supply chain attacks represent 14.3% of the particular sample of images from public image libraries, showing that these attacks continue to be an effective method of attacking cloud native environments.
– The Log4j zero-day vulnerability was immediately exploited in the wild. Team Nautilus detected multiple malicious techniques, including known malware, fileless execution, reverse shell executions, and files that were downloaded and executed from memory, all emphasizing the need for runtime protection.
– Researchers observed honeypot attacks by TeamTNT after the group announced its retirement in December 2021. However, no new tactics have been in use, so it is unclear if the group is still in operation or if the ongoing attacks originated from automated attack infrastructure. Regardless, enterprise teams should continue preventative measures against these threats.

Aqua’s Team Nautilus made extensive use of honeypots to investigate attacks in the wild. To investigate supply-chain attacks against cloud native applications, the team examined images and packages from public registries and repositories such as Docker Hub, npm and the Python Package Index. Team Nautilus used Aqua’s Dynamic Threat Analysis (DTA) product to analyse each attack. Aqua DTA is the industry’s first container sandbox solution that dynamically assesses container image behaviours to determine whether they harbour hidden malware. This enables organizations to identify and mitigate attacks that static malware scanners cannot detect.

“The key takeaway from this report is that attackers are highly active — more than ever before — and more frequently targeting vulnerabilities in applications, open source and cloud technology,” said Morag. “Security practitioners, developers and devops teams must seek out security solutions that are purpose-built for cloud native. Implementing proactive and preventative security measures will allow for stronger security and ultimately protect environments.”

To ensure cloud environments are secure, Aqua’s Team Nautilus recommends implementing runtime security measures, a layered approach to Kubernetes security, and scanning in development.
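To make "scanning in development" concrete, here is a minimal sketch of a CI gate that blocks a build when a container image has critical findings. It assumes the open-source Trivy scanner is installed on the build agent; the image name and severity threshold are illustrative, and this is not a description of Aqua's commercial tooling.

import subprocess
import sys

# Hypothetical image built earlier in the pipeline.
IMAGE = "registry.example.com/payments-api:1.4.2"

# Trivy exits non-zero when findings at or above the given severity exist.
result = subprocess.run(
    ["trivy", "image", "--severity", "CRITICAL", "--exit-code", "1", IMAGE]
)
if result.returncode != 0:
    print("Critical vulnerabilities found - blocking deployment")
    sys.exit(1)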

About Cloud.in:

Cloud.in is an AWS Advanced Consulting Partner that delivers AWS managed services catering to every business need and aims to simplify the AWS Cloud journey. Our team is the driving force behind Cloud.in, with the experience, knowledge, and skills to make cloud computing and the AWS Cloud a pleasant experience.

Ref: https://www.cloudcomputing-news.net/

Thursday, 5 May 2022

Cloud Computing Can Help Life Sciences Catch Up Before It’s Too Late

 Biotechnology has the power to tackle global threats such as pandemics and climate change, but antiquated experimental and data collection methods are holding it back. Guy Levy-Yurista, CEO of the experiment platform developer Synthace, explains how cloud computing could accelerate the efforts of biotech companies to solve the big problems. 


There is a sundial in Redu, a sleepy village in rural Belgium, that is a warning to us all. Next to an engraving of DNA’s familiar double helix, the words “tempus fugit, augebitur scientia” are carved in stone. Translated, it tells us that as time flies, knowledge will increase.

It is an optimistic sentiment, but it is also a warning. Yes, we will grow in our knowledge of the world around us as time goes on, but the question that follows, at least for me, is: how fast will that happen? Perhaps the current answer to that is “not fast enough.”

Here’s another way to ask the same question: is science going to reach its true potential before the end of this decade? My answer: only if we set it free. We have to find a way to unleash science from everything that holds it back in order to address a multitude of crises facing society, including antibiotic resistance, pandemics, and climate change.   

And make no mistake, the challenge ahead of us is massive. Ours is an industry under incredible, accelerating pressure to tackle biological complexity, move faster to scientific insight, and improve data reproducibility. But we must fight against the old, difficult ways of doing things; the legacy technology and processes that can suffocate the discoveries we know are within arm’s reach.

If you don’t believe me, ask yourself: among all of these challenges, are our biologists empowered to work in the best way possible? Right now, I would say it is the opposite. Researchers are limited by the need to be in one place — in the lab — tethered to their lab station and looking after demanding equipment that traps them in a vicious cycle of menial tasks. Science is too often fixed in one location and held hostage by limits that we, as mere humans, have placed upon it.

Why is this? Too much friction exists between the biologist and the science they want to do.  Too much relies on manual intervention, which introduces error and strangles progress. If we want to deliver better drugs, better biotech, better climate tech, better food tech, then we have to find a way to reduce that friction by removing our dependence on manual involvement. We have to uncouple the imagination and the creativity of our best minds from the limitations of the physical spaces that we depend on right now. And the clock is ticking.

The solution is all around us: the cloud. We have to leverage next-generation, cloud-based automation technologies by fixing the missing link between those technologies and the physical world itself. To make this connection, we need a reliable way to represent biological work with code. If we can do this we can then represent experiments themselves in a digital format. Better yet, we can digitalize, and therefore unleash, science itself.

When we can do that, the connections to every other digital medium open up. The true power of artificial intelligence, machine learning, and eventually quantum computing become available for the life sciences, enabling the realization of its full potential.

‘Representing experiments with code’ is easier said than done though. It doesn’t just happen overnight. Who should write this code? Is it the biologists? Do we ask them to become computer scientists? No, this would be a tragic mistake. Biologists don’t want to spend their time coding; they want to spend their time doing science.

At the most basic level, scientists need tools that reduce the steps between them and the goals they’re going after in their experiments. The tools should be intuitive, letting researchers guide themselves towards what they want to achieve, suggesting templates to save time, and helping them discover new and creative ways to do their work. Even better, they should help them produce the highest quality data that is primed for cloud computing and every other technology that can be connected to it.

While there are many platforms currently helping to digitize the record-keeping or purely automation-based elements of the experimental process, we’ve seen very few others moving into the experiment digitization space in the same way we have. By translating modular experiments into automation instructions, the platform our company is building bridges a yawning gap between experimental design and the context-rich data scientists need to move forward.
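As a purely illustrative sketch of what representing experiments with code can mean, the snippet below models an experiment as structured data that compiles into automation instructions. None of these names or structures come from Synthace's platform; they are hypothetical.

from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    action: str          # e.g. "dispense", "incubate", "measure"
    parameters: dict

@dataclass
class Experiment:
    name: str
    steps: List[Step]

    def to_instructions(self) -> List[str]:
        # "Compile" the design into the kind of instructions a liquid handler
        # or plate reader driver might consume.
        return [f"{s.action}: {s.parameters}" for s in self.steps]

assay = Experiment(
    name="dose-response",
    steps=[
        Step("dispense", {"reagent": "compound-A", "volumes_ul": [1, 2, 5, 10]}),
        Step("incubate", {"temp_c": 37, "minutes": 30}),
        Step("measure", {"readout": "absorbance", "wavelength_nm": 450}),
    ],
)
print("\n".join(assay.to_instructions()))

Once the design exists as data, it can be versioned, parameterized and handed to cloud services, which is the digitization the article argues for.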

Every scientist is driven by the desire to find solutions to humanity’s hardest problems. Right now, they’re rarely given the best environment to translate their boldest ideas into scientific reality. Soon this is all going to change. Armed with next-generation, cloud-native technology, they will solve humanity’s most pressing problems at speed.

To quote the poet E.E. Cummings, “all ignorance toboggans into know.” The joy of scientific discovery is one that should be encouraged, empowered, and pursued with the strongest of intents. But the clock is still ticking. Tempus fugit.

Ref: https://www.labiotech.eu/

Tuesday, 3 May 2022

How Aurora Serverless made the database server obsolete

 

Version 2 of AWS's cloud-native relational database brings significant improvements


SPONSORED FEATURE Application development has transformed in the last few years, with software built for the cloud serving highly volatile workloads from globally distributed users at scale. Serverless computing evolved to support these applications, eliminating traditional performance and capacity management concerns altogether.

The venerable relational database model, which still supports millions of applications across the world today, must keep up with advances in serverless computing. That's why Amazon created a serverless version of its Aurora cloud-native relational database system to support evolving customer demands.

"More customers want to build applications that matter to their end users instead of focusing on managing infrastructure and database capacity," explains Chayan Biswas, principal technical product manager at Amazon Web Services. "Serverless computing is a way they can achieve that very easily."

A history of serverless computing

Amazon first introduced its Lambda serverless computing concept in 2014. It was a natural evolution of the virtualization trend that had gone before it, which eliminated the need to run each application on a separate physical server by abstracting operating systems away from the hardware.

Virtual machines are far more efficient than dedicated hardware servers, compressing applications' computational footprint. But many applications don't run constantly, only needing to operate in response to other events. This is especially true when you break monolithic applications into container-based services.

Lambda serverless computing uses Amazon's Firecracker microVM framework under the hood. It enables developers to invoke a function and retrieve the result without provisioning or managing a server. The underlying framework takes care of the rest.

This offers at least two benefits. The first is that not running a dedicated virtual machine or container for the function reduces the cost of operation. The second is that the underlying container-based infrastructure can quickly scale the function's capacity, maintaining performance even as the volume of events that call it increases.
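For readers who have not seen one, a Lambda function is simply a handler that runs when an event arrives. The hypothetical Python example below returns a small JSON response and consumes resources only while it executes.

import json

def handler(event, context):
    # 'event' carries the triggering payload (API request, queue message, etc.).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }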

The birth of the serverless database

Lambda supports cloud-based applications, but customers wanted the next logical step: support for serverless databases. AWS took a serverless version of Aurora MySQL into general availability in 2018, followed by a version for PostgreSQL in 2019.

Amazon Aurora Serverless translates the cost and performance benefits to the database, enabling customers to scale their relational data workloads quickly without disruption. They also pay only for the database resources they use, which is especially useful for applications that don't call the database frequently, like a low-volume blog or development and testing databases.

Beyond this, serverless database operations also save DBAs from having to provision and manage database capacity. This is part of a workload that Amazon calls 'undifferentiated heavy lifting', which is mundane work that doesn't make full use of a DBA's skills. Using serverless automation to abstract this enables DBAs to concentrate on more important tasks like database optimization and data governance.

Before the move to Aurora Serverless, customers would have to scale their databases by manually changing the type of virtual machine that their system ran on. That created an additional management overhead and also took the database down for up to 30 seconds, which was unacceptable for many users.

Instead, customers would constantly provision their databases for peak workloads. This was expensive and wasted resources, causing them to pay for large VMs that would sit partly idle for large periods of time.

How Aurora Serverless works

Amazon Aurora Serverless v1 changed everything by enabling customers to resize their VMs without disrupting the database. It would look for gaps in transaction flows that would give it time to resize the VM. It would then freeze the database, move to a different VM behind the scenes, and then start the database again.

This was a great starting point, explains Biswas, but finding transaction gaps isn't always easy. "When we have a very chatty database, we are running a bunch of concurrent transactions that overlap," he explains. "If there's no gap between them, then we can't find the point where we can scale."

Consequently, the scaling process could take between five and 50 seconds to complete. It could sometimes end up disrupting the database if an appropriate transaction gap could not be found. That restricted Aurora Serverless instances to sporadic, infrequent workloads.

"One piece of feedback that we heard from customers was that they wanted us to make Aurora Serverless databases suitable for their most demanding, most critical workloads," explained Biswas. That included those with strict service level agreements and high-availability needs.

Improving serverless database services

With that in mind, version two of Aurora Serverless brings some significant improvements, including a new approach that lets it scale to thousands of transactions in seconds. AWS achieved this by providing the database process with more resources, over-provisioning them under the hood. That eliminates the need to find a gap in database traffic because the serverless process doesn't move between different VMs to scale.

That might seem like a losing proposition on Amazon's side, because the company has to absorb the cost of that over-provisioning. AWS is used to finding new internal efficiencies using its economy of scale, though. To improve scalability in Aurora Serverless v2, it got smarter about workload placements.

The company can now place workloads with complementary profiles on the same machine. A reporting workload that runs at night could run on the same VM as a business application that operates during the day, for example. That's a benefit of the cloud's multi-tenant operating model.

Serverless v2 also scales in finer-grained increments. V1 customers could only double their provisioned amount of the database computing unit, known as the Aurora Capacity Unit (ACU), when usage exceeded a set threshold. Aurora Serverless v2 allows increases in 0.5 ACU increments.
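In practice, that capacity range is declared when the cluster is created. The sketch below shows one way this might look with boto3; the identifiers, engine version and password handling are placeholders, and the parameter shapes should be checked against the current RDS documentation.

import boto3

rds = boto3.client("rds")

# Cluster whose compute scales between 0.5 and 128 ACUs in 0.5 ACU steps.
rds.create_db_cluster(
    DBClusterIdentifier="demo-serverless-v2",
    Engine="aurora-mysql",
    EngineVersion="8.0.mysql_aurora.3.02.0",   # assumed Serverless v2-compatible version
    MasterUsername="admin",
    MasterUserPassword="change-me",            # use Secrets Manager in practice
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 128},
)

# Compute is added as a 'db.serverless' instance inside the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="demo-serverless-v2-writer",
    DBClusterIdentifier="demo-serverless-v2",
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",
)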

There are also improvements in other areas, including availability. Although high availability for storage is standard, Aurora Serverless v1 doesn't offer high availability for compute. V2 offers configuration across multiple availability zones. It will also support read replicas across those instances for faster record retrieval, along with Aurora Global Database support for read-only workloads. This means faster data replication across regions and failover times of under a minute for increased reliability in an Aurora Serverless v2 environment.

RDS Proxy

Amazon has also introduced technology that reconciles a fundamental difference between the serverless operating model and relational database principles.

DynamoDB, Amazon's managed NoSQL key-value database, is already serverless because of its underlying architecture. You can easily introduce auto-scaling rules directly in the web interface when setting up DynamoDB tables.

Things are different with Aurora because of the way that relational databases set up connections, Biswas explains.

"In a serverless environment, a Lambda function runs and then it's done," he points out. "Relational databases tend to be persistent."

Relational databases are typically stateful, maintaining a single connection to an application over time so that they don't have to waste time setting things up again every time the application makes a query. Serverless computing is a stateless concept that creates and rips down connections as needed.

Applications using modern container architectures are designed to scale quickly. If every container-based function opens a connection to a database, the relational engine will spend all its time managing connections rather than serving queries.

At AWS re:Invent 2019, Amazon launched RDS Proxy, a service to solve the connection problem. The service, which entered general availability in June 2020, sits between the application and database and pools connections. Instead of bombarding the database server, container-based applications connect to the proxy, which can hand out connections from the pool. It supports serverless Lambda functions, Kubernetes containers, or any other stateless application that doesn't natively support connection pooling.
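The application-side change is small: connect to the proxy endpoint instead of the database endpoint. The hypothetical function below does that with PyMySQL; the endpoint, credentials and schema are placeholders, and in practice IAM authentication or Secrets Manager would replace the plain password.

import os
import pymysql   # assumes the PyMySQL package is bundled with the function

def handler(event, context):
    conn = pymysql.connect(
        host=os.environ["PROXY_ENDPOINT"],   # the RDS Proxy endpoint, not the cluster endpoint
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database="orders",                   # hypothetical schema
        connect_timeout=5,
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT COUNT(*) FROM orders WHERE status = %s", ("open",))
            (open_orders,) = cur.fetchone()
        return {"open_orders": open_orders}
    finally:
        conn.close()   # the proxy keeps the backend connection pooled for reuse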

Lambda integration

AWS doesn't just support efficient access to Aurora Serverless from AWS Lambda functions; it supports the reverse. Lambda integration lets customers invoke serverless functions from within the database. That lets developers write business logic in a Lambda function, which supports various languages, rather than writing stored procedures in a procedural dialect of SQL.

Lambda integration does more than give developers more flexibility. It also puts compute power outside the database, allowing it to concentrate on queries rather than impeding its performance by running embedded application logic. Finally, it simplifies application workflows. For example, a developer can have Aurora call a machine learning model directly as a Lambda function rather than coding that request into their application.
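The function being invoked is ordinary Lambda code. The hypothetical handler below stands in for the kind of business logic, such as a fraud-scoring call, that might otherwise live in a stored procedure; the payload fields are illustrative, and on the database side the call goes through Aurora's Lambda integration (for example the aws_lambda extension on Aurora PostgreSQL).

def handler(event, context):
    # Aurora passes a JSON payload; these field names are illustrative only.
    order_total = float(event.get("order_total", 0))
    repeat_customer = bool(event.get("repeat_customer", False))

    # Trivial stand-in for a real machine learning model call.
    risk = 0.1 if repeat_customer else min(0.9, order_total / 10000)
    return {"order_id": event.get("order_id"), "fraud_risk": round(risk, 2)}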

Amazon continues to make advances with its serverless database applications. Amazon Aurora Serverless v2 with PostgreSQL and MySQL compatibility GA'd last week at AWS Summit in San Francisco. "We will essentially support all of the features in Aurora with Aurora Serverless v2," Biswas concludes. Soon, for many customers, the concept of a database server could be an anachronism.

About Cloud.in:

Cloud.in is an AWS Advanced Consulting Partner that delivers AWS managed services catering to every business need and aims to simplify the AWS Cloud journey. Our team is the driving force behind Cloud.in, with the experience, knowledge, and skills to make cloud computing and the AWS Cloud a pleasant experience.

Would you like to know more? Please reach out to us at sales@cloud.in / +91-20-6608 0123.

Ref: The Register

Gartner Forecasts Worldwide Public Cloud End-User Spending to Reach Nearly $500 Billion in 2022

 IaaS, DaaS and PaaS to Witness Highest Spending Growth This Year

Worldwide end-user spending on public cloud services is forecast to grow 20.4% in 2022 to total $494.7 billion, up from $410.9 billion in 2021, according to the latest forecast from Gartner, Inc. In 2023, end-user spending is expected to reach nearly $600 billion.

“Cloud is the powerhouse that drives today’s digital organizations,” said Sid Nag, research vice president at Gartner. “CIOs are beyond the era of irrational exuberance of procuring cloud services and are being thoughtful in their choice of public cloud providers to drive specific, desired business and technology outcomes in their digital transformation journey.”

Infrastructure-as-a-service (IaaS) is forecast to experience the highest end-user spending growth in 2022 at 30.6%, followed by desktop-as-a-service (DaaS) at 26.6% and platform-as-a-service (PaaS) at 26.1% (see Table 1). The new reality of hybrid work is prompting organizations to move away from powering their workforce with traditional client computing solutions, such as desktops and other physical in-office tools, and toward DaaS, which is driving spending to reach $2.6 billion in 2022. Demand for cloud-native capabilities by end-users accounts for PaaS growing to $109.6 billion in spending.
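A quick arithmetic check of these growth rates can be made directly from the Table 1 figures below (millions of U.S. dollars):

# Year-over-year growth computed from the Table 1 forecast figures.
table = {
    "BPaaS": (51_410, 55_598),
    "PaaS": (86_943, 109_623),
    "SaaS": (152_184, 176_622),
    "Cloud Management and Security": (26_665, 30_471),
    "IaaS": (91_642, 119_717),
    "DaaS": (2_072, 2_623),
    "Total Market": (410_915, 494_654),
}

for segment, (y2021, y2022) in table.items():
    growth = (y2022 / y2021 - 1) * 100
    print(f"{segment:30s} 2022 growth: {growth:5.1f}%")

IaaS comes out at roughly 30.6%, DaaS at 26.6%, PaaS at 26.1% and the total market at 20.4%, matching the figures quoted above.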

Table 1. Worldwide Public Cloud Services End-User Spending Forecast (Millions of U.S. Dollars)

 

                                                         2021        2022        2023
Cloud Business Process Services (BPaaS)                51,410      55,598      60,619
Cloud Application Infrastructure Services (PaaS)       86,943     109,623     136,404
Cloud Application Services (SaaS)                     152,184     176,622     208,080
Cloud Management and Security Services                 26,665      30,471      35,218
Cloud System Infrastructure Services (IaaS)            91,642     119,717     156,276
Desktop as a Service (DaaS)                             2,072       2,623       3,244
Total Market                                          410,915     494,654     599,840

“Cloud native capabilities such as containerization, database platform-as-a-service (dbPaaS) and artificial intelligence/machine learning contain richer features than commoditized compute such as IaaS or network-as-a-service,” said Nag. “As a result, they are generally more expensive, which is fueling spending growth.”

SaaS remains the largest public cloud services market segment, forecasted to reach $176.6 billion in end-user spending in 2022. Gartner expects steady velocity within this segment as enterprises take multiple routes to market with SaaS, for example via cloud marketplaces, and continue to break up larger, monolithic applications into composable parts for more efficient DevOps processes.

Emerging technologies in cloud computing such as hyperscale edge computing and secure access service edge (SASE) are disrupting adjacent markets and forming new product categories, creating additional revenue streams for public cloud providers.

“Driven by maturation of core cloud services, the focus of differentiation is gradually shifting to capabilities that can disrupt digital businesses and operations in enterprises directly,” said Nag. “Public cloud services have become so integral that providers are now forced to address social and political challenges, such as sustainability and data sovereignty.

“IT leaders who view the cloud as an enabler rather than an end state will be most successful in their digital transformational journeys,” said Nag. “The organizations combining cloud with other adjacent, emerging technologies will fare even better.”

Gartner clients can read more in Forecast: Public Cloud Services, Worldwide, 2020-2026, 1Q22 Update. Learn more in the complimentary Gartner webinar Cloud Computing Scenario: The Future of Cloud.

About Cloud.in:

Cloud.in is an AWS Advanced Consulting Partner that delivers AWS managed services catering to every business need and aims to simplify the AWS Cloud journey. Our team is the driving force behind Cloud.in, with the experience, knowledge, and skills to make cloud computing and the AWS Cloud a pleasant experience.

Thursday, 20 January 2022

Why enterprises must embrace automation to boost cloud security

It is a multi-cloud world, and the nature of multi-cloud environments makes it extremely challenging for enterprises to ensure security. Firstly, from a user access perspective, a multi-cloud environment makes it difficult to keep access control secure: maintaining multiple user access systems and ensuring a consistent access policy is a huge challenge for every administrator. Another common security issue faced by enterprises is misconfiguration of security settings. Misconfiguration happens when default cloud credentials are left unchanged or when excessive permissions are granted.

Though cloud misconfiguration is one of the most common errors exploited by cybercriminals, there are other significant threats. The Cloud Security Alliance, for example, lists lack of cloud security architecture and strategy; insufficient identity, credential, access, and key management; account hijacking; insider threats; insecure interfaces and APIs; weak control plane; limited cloud usage visibility; and abuse and nefarious use of cloud services as other significant threats. Many enterprises mistakenly assume that the same security settings that have worked for them on-premises will work in the cloud environment too. It is also common for many enterprises to leave the default credential settings unchanged. But as Gartner has rightly pointed out, “Through 2022, at least 95% of cloud security failures will be the customer’s fault.” This means that customers are responsible for securing the databases or applications that they host on the cloud.




How automation can help

Given the complexity of cloud environments and the challenges associated with securing a multi-cloud environment, it is imperative for enterprises to seek out effective ways of securing their cloud deployments. This is where automation can be of great advantage. Cloud automation helps eliminate the human errors that can leave cloud-based infrastructure insecure. For example, as changes are made across clouds, a cloud automation platform can monitor configuration changes and check whether they adhere to the required compliance and security best practices. Cloud automation platforms can also help automatically configure different components of the cloud security ecosystem such as networks, access points, or firewalls. This significantly reduces many of the manual errors that are common in a multi-cloud environment.
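The sketch below illustrates the kind of check such a platform runs continuously: flagging security groups that expose a port to the entire internet. It uses boto3 against a single account and region, and alerting is reduced to a print statement for illustration.

import boto3

ec2 = boto3.client("ec2")

# Walk every security group and flag inbound rules open to 0.0.0.0/0.
for page in ec2.get_paginator("describe_security_groups").paginate():
    for sg in page["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            open_to_world = any(
                r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
            )
            if open_to_world:
                print(f"ALERT: {sg['GroupId']} ({sg['GroupName']}) allows port "
                      f"{rule.get('FromPort', 'all')} from 0.0.0.0/0")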

In an environment where enterprises face a huge amount of risk from zero-day vulnerabilities, automation can help them patch and update servers quickly. Patching can be carried out automatically across a huge number of servers without any manual intervention, which helps administrators respond quickly when a serious vulnerability is discovered. Automation can also provide a centralized view across multiple cloud environments. Enterprises can use centralized dashboards provided by service providers to stay compliant and enforce permissions based on roles. Identity and access management is one of the major challenges in a cloud-based environment, and this is where automation can provide a lot of value by reducing security risks. Similarly, automation can help analyze the network continuously for any suspicious or malicious behavior, which can be extremely useful for preventing attacks. Using automated security tools, enterprises can also run bots that continuously monitor the complete cloud ecosystem for policy violations and automatically alert them so that remedial action can be taken.

In summary, cloud security automation can raise the bar for security. From ensuring standardization by applying consistent policies, to improving efficiency by enabling even small security teams to scan multiple cloud instances and servers for vulnerabilities and patch them, cloud security automation can give enterprises a big advantage in ensuring security.

#cloud #security #partner #AWS #ManagedSecurity


Thursday, 30 December 2021

How born in the cloud firms can ensure a proactive security posture

In 2021, India had a phenomenal year with respect to startups. The Indian Tech Unicorn Report 2021 said that India saw 46 unicorns (companies with a valuation of more than $1 billion). India also today has the world’s largest startup ecosystem, with more than 60,000 startups.

A huge percentage of these startups relied on the cloud, thanks to the cloud’s natural ability to help startups achieve scalability at a lower TCO and access to the latest technologies. With a pay-as-you-go pricing model, the cloud has truly enabled early-stage start-ups or born-in-the-cloud firms to access technology services as required without the commitment of upfront capital. This allows startups to keep on experimenting with different services, and create a business model that can succeed in the market. Naturally, the cloud has acted as a big lever for startups to leapfrog without the restrictions of capital or scalability.

It is hence not surprising for startups to prefer cloud-based platforms, which give them the ability to scale quickly on a pay-per-use model. That said, while the cloud gives born-in-the-cloud enterprises the advantage of scale, it also exposes them to several vulnerabilities. Firstly, unlike the traditional model, cloud-based applications do not have fixed boundaries to guard.

Most born-in-the-cloud firms do not have large security teams, as growth and profitability are the main parameters. Security is almost an afterthought. In the rush to release products faster than the competition, security is always a lower priority. Due to the limited number of security personnel, there is a lack of control and understanding of who can access sensitive or confidential information.

In many cloud-dependent firms, it is common to see information stored in applications that are beyond the control of the IT team. In most enterprises, the IT team is not even aware of these applications – this is referred to as ‘Shadow IT’. As these startups grow in scale, and workloads span multiple clouds, it becomes extremely challenging for these companies to secure their data.

While most cloud service providers have the best security processes and technologies to protect their infrastructure from cybercriminals, the onus of security still lies with the customer. Because of the ease of using the cloud, it is common for born-in-the-cloud firms to underestimate the risks, which has often proved costly and has led to many data breaches. However, as the research firm Gartner rightly points out, the challenge of ensuring security lies not in the security of the cloud itself, but in the processes and policies related to the configuration of the cloud. In most cases, an improper configuration is the cause of data breaches.

The Cloud Security Alliance highlights some of the most common risks with respect to cloud security. Some of these include misconfiguration and inadequate change control; lack of cloud security architecture and strategy; insufficient identity, credential, access, and key management; account hijacking; insider threats; insecure interfaces and APIs; and limited cloud usage visibility and abuse and nefarious use of cloud services. Not surprisingly, Gartner says that through 2025, 99% of cloud security failures will be the customer’s fault. For example, cloud misconfigurations, one of the most common reasons for data breaches, happen when default credentials given by the service provider are left unchanged.


Ensuring a secure cloud

One of the most effective steps that born-in-the-cloud firms can take is to encrypt their data. This includes encrypting data at rest as well as data in motion. If this is done, stolen data will be of no use to any cybercriminal, as it is unreadable without the key. It is also equally important to regularly scan for vulnerabilities such as SQL injections or hidden malware.
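As one small, hedged example of encryption at rest, the sketch below sets default SSE-KMS encryption on an S3 bucket and then uploads an object so it is stored encrypted; the bucket name and key alias are placeholders. Data in motion to S3 is protected by TLS, which the SDK uses by default.

import boto3

s3 = boto3.client("s3")

# Make server-side encryption with a KMS key the bucket default.
s3.put_bucket_encryption(
    Bucket="startup-customer-data",                      # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/customer-data", # hypothetical key alias
            }
        }]
    },
)

# Objects are now encrypted at rest; the flag below makes that explicit per upload.
s3.put_object(
    Bucket="startup-customer-data",
    Key="exports/users.csv",
    Body=b"id,email\n1,user@example.com\n",
    ServerSideEncryption="aws:kms",
)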

Additionally, as workloads can span multiple clouds, it is important for organizations to use cloud access security broker (CASB) tools and multi-factor authentication. This can not only add one more layer of security but also reduce the possibility of data breaches.

As most born-in-the-cloud firms have small teams, they can get overwhelmed by the challenge of ensuring security, especially if the organization is growing fast. Last year, a survey by the Cloud Security Alliance pointed out that nearly one-third of respondents still managed cloud security manually. This problem is magnified for cloud-dependent firms. This is where automation can help. Automation tools can not only enforce standardized policies across every cloud ecosystem but also help in automatically configuring firewalls, networks, databases, or access points. Automation can also help in automatically patching servers or virtual instances, which makes it possible for even smaller firms to ensure robust security. Cloud governance tools can help bring a centralized and unified view for managing workloads across multiple clouds, and can also be used to ensure compliance with different regulations in different countries.

Last but not least, born-in-the-cloud firms can take the help of managed service providers, who have the required expertise, credentials, experience, and skill sets to help these firms address the gaps in their security posture.

Going forward, as cloud usage will continue to be high, it is recommended that cloud-native firms take some of the steps advised above, which can ensure that their growth is not hampered by security-related issues.

#cloud #Security #Automation #ManagedService #partner #AWS #CloudInfra
