Friday, 28 April 2023

Hybrid Multi-Cloud: Why is everyone opting for it?


Fewer and fewer organizations now rely on a single private or public cloud. The key reasons for this shift include avoiding vendor lock-in, reducing costs, gaining flexibility, and meeting the need for more cloud resources. By opting for multi-cloud and hybrid cloud strategies, organizations can avoid losing access to cloud resources because of a single point of failure.

However, enterprises prefer to retain certain databases either in an on-premises data center or in a private cloud, while still wanting to leverage the benefits of different public cloud vendors.

Organizations did combine a limited mix of public and private clouds in the past, but those setups were not as efficient or reliable, which led to the adoption of the hybrid multi-cloud strategy. In this approach, organizations use public cloud services from several different providers to get best-of-breed results, avoid vendor lock-in, and minimize dependency on any single provider.

Going cloud-smart with hybrid multi-cloud is the future

In today’s digitalized business world, the cloud plays a key role by transforming the way applications are built, stored, and used. Among the public, private, and hybrid models, the hybrid approach provides a competitive edge: it combines computing, services, and storage across private clouds, on-premises data centers, public clouds, and even edge locations. According to the IBM Institute for Business Value, hybrid cloud generates 2.5 times greater business value than a single cloud platform approach.

A multi-cloud approach means an organization uses cloud computing services from two or more providers for cloud hosting. It gives better control, security, and performance, and improves disaster recovery options.

Software developers want to focus on development and leave the infrastructure and related managed services to public cloud providers. The operations team, on the other hand, often prefers to build a private cloud around the existing data center.

In such a scenario, a hybrid multi-cloud strategy provides the right platform for each specific task, and most experts believe this is the approach of the future. The main benefits of this strategy are outlined below.

Accelerates innovation

Modern business models are complex, with constantly evolving customer requirements and other challenges. IT leaders have to create environments that can adapt to growing data and workloads, make them user-friendly for employees, customers, and partners, and connect them meaningfully. A good hybrid cloud strategy breaks down silos and efficiently manages multiple clouds, on-premises data centers, and other environments. It integrates and connects these IT environments and leverages data wherever it lives, driving innovation.

Reduction in cost

Businesses can move specific applications across different cloud environments or into on-premises data centers, based on the organization’s requirements. This optimizes the use of cloud resources, bringing down capital and operating costs. Such a flexible and scalable setup can also enhance performance and increase productivity. The gap between legacy and modern systems can be bridged without replacing the old systems, reducing costs further.

Vendor lock-in gets eliminated

Organizations avoid being locked into a single vendor or cloud service provider because there is no dependency on any one of them. They have the freedom to choose the provider with the best features and services for a specific workload, and workloads remain portable between providers. Business continuity plans are not hampered by the threat of vendor lock-in, which contributes to the organization’s competitive edge.

Increase in cloud agility & resilience

To stay competitive, organizations need the capability to continuously deliver innovative solutions and services to the market. To be agile, enterprises should equip developers with the right skills in advanced technologies and be willing to embrace more than a single cloud solution when one alone is not the best fit. By adopting a hybrid multi-cloud strategy, organizations can choose and change cloud providers based on business value and capabilities. This enhances cloud agility and resilience and enables organizations to stay ahead of the curve.

More security across clouds

Spreading data and applications across multiple cloud services safeguards them better than a single-cloud approach. Day-to-day business functions can run on one or more public cloud services, while classified or sensitive data can be kept in a private cloud or an on-premises data center. This greatly reduces exposure to cyber-attacks and data breaches, and because workloads are distributed across multiple cloud providers, cyber risks are easier to mitigate.

A hybrid multi-cloud strategy is here to stay and has become mainstream as more businesses adopt it. It brings benefits in performance, governance, compliance, scaling, security, and time-to-market. It is, however, crucial to implement the most appropriate solution by factoring in budget, managed cloud services, cloud migration, and change management. Organizations of all sizes can adopt a hybrid multi-cloud approach to achieve their desired business outcomes.

Written by, Rahul Kurkure, Director (Cloud.in)

Tuesday, 25 April 2023

How to start your AWS Training & Certification journey?



Starting your AWS (Amazon Web Services) Training & Certification journey can seem daunting at first, but with the right approach, it can be a rewarding and fulfilling experience. Here are some steps you can take to get started:


  1. Identify your learning objectives: Before starting your AWS training, you need to identify your learning objectives. This includes understanding what AWS services you want to work with and what level of expertise you want to achieve. This will help you to choose the right training path and certification that best suits your needs.


Solution areas available for learning include Advanced Networking, Data Analytics, Databases, AWS for Games, Machine Learning, Media Services, Security, Serverless, and Storage.


Expertise levels available are Foundational, Associate, Professional, and Specialty.


  2. Choose your training path: AWS offers different training paths, including Cloud Practitioner, Architect, Developer, Operations, and Specialty. Each path has a different level of difficulty and requires a different set of skills. Choose the path that aligns with your learning objectives.


  3. Enroll in training courses: AWS offers various training courses, including classroom training, online training, and virtual training. You can choose the one that best suits your schedule and budget. AWS training courses are designed to provide you with the knowledge and skills you need to work with AWS services.


AWS Skill Builder is an online learning center where you can learn from AWS experts and build cloud skills online.


  4. Practice with AWS services: Once you have completed your training, start practicing with AWS services. AWS offers a Free Tier that lets you use many of its services at no cost, within limits, for a limited period. Use this opportunity to gain hands-on experience, for example by making a few calls with the AWS SDK, as sketched below.
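
As a minimal, hedged sketch of that first hands-on step (not an official AWS exercise), the snippet below uses the boto3 SDK to list your S3 buckets and create a practice bucket; the bucket name and region are placeholders, and Free Tier limits still apply.

```python
# Minimal boto3 sketch for getting hands-on with AWS after training.
# Assumes credentials are already configured (e.g. via `aws configure`).
# The bucket name below is a placeholder; S3 bucket names must be globally unique.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# List the buckets the configured account can already see.
for bucket in s3.list_buckets()["Buckets"]:
    print("Existing bucket:", bucket["Name"])

# Create a small practice bucket to experiment with.
s3.create_bucket(Bucket="my-aws-practice-bucket-12345")
print("Practice bucket created")
```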


  5. Get certified: AWS offers various certifications that validate your skills and expertise in working with AWS services.


Below are the certifications available in AWS

  • Foundational Level - Cloud Practitioner
  • Associate Level - Solutions Architect, Developer, SysOps Administrator
  • Professional Level - Solutions Architect Professional, DevOps Engineer Professional
  • Specialty Level - Advanced Networking, Data Analytics, Database, Machine Learning, Security, SAP on AWS

Choose the certification that aligns with your learning objectives and validates your skills and knowledge.


In conclusion, starting your AWS Training & Certification journey requires you to identify your learning objectives, choose your training path, enroll in training courses, practice with AWS services, and get certified. With the right approach and dedication, you can achieve your learning objectives and become an AWS expert.


Written by, Suvarna Jadhav, Manager - Strategic Partnership & Alliances (Cloud.in)


Monday, 24 April 2023

Amazon launches generative AI play in AWS Bedrock

 

Amazon says AWS Bedrock will provide access to multiple foundation AI models for enterprise-scale AI applications.

Amazon is the latest hyperscaler to take on the world of foundation AI, including generative and large language models. It has launched a new platform called AWS Bedrock that includes access to in-house tools such as the Titan family of foundation models, and pre-trained models from start-ups like AI21 Labs, Anthropic and Stability AI. The company says the focus is on providing a range of models for use in “enterprise-scale” AI tools. One expert said Amazon has “a long way to go” to catch up with other players in the field.

Opening AWS up as a marketplace for multiple AI models mirrors moves by Google to offer those made by third parties in Google Cloud alongside its own PaLM, including from Midjourney and AI21 Labs. Microsoft has gone “all in” with OpenAI through its Azure cloud, offering GPT-4, ChatGPT and other models for customers.

Amazon says it will allow companies to train chatbots and AI tools on their own proprietary data without having to invest in costly data centres and expensive AI chips. AWS will use a combination of its own custom AI chips and those from Nvidia. “We’re able to land hundreds of thousands of these chips, as we need them,” explained Dave Brown, VP of Elastic Compute Cloud at AWS.

The launch of Bedrock has been in the works for the past few months, with AWS signing partnership agreements with Stability AI and other start-ups, as well as investing more in generative AI apps and its underlying technology. Hugging Face has also worked to bring its library of text-generating models onto AWS and Amazon has launched an AI accelerator for startups.

AWS is the largest hyperscaler in the world but is facing increasing competition from Google Cloud, Microsoft Azure and others, largely off the back of their AI offerings. Both companies have invested heavily in generative AI tools, including chatbots such as ChatGPT and Google Bard.

Amazon hasn’t unveiled pricing for its AI offerings yet and full details aren’t clear, but users will be able to tap into the various foundation models via an API. The service is focused on “enterprise-scale” apps rather than individual tools; a rough sketch of what such an API call might look like follows.
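
Bedrock was still in limited preview when this piece was written, so the public interface was not final; purely as a hedged illustration, the sketch below shows roughly how a hosted foundation model can be invoked through the boto3 SDK. The model ID, prompt and request body are assumptions, and each model family expects its own request schema.

```python
# Hypothetical sketch of invoking a Bedrock-hosted foundation model with boto3.
# The model ID and request body are illustrative assumptions; Titan, Jurassic-2,
# Claude and Stable Diffusion each define their own input format.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",  # placeholder model identifier
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "Draft a short product announcement."}),
)

result = json.loads(response["body"].read())
print(result)
```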

Multiple AI models

AI21 Labs’ Jurassic-2 family of foundation models is particularly suited to generating multilingual text, while Anthropic’s Claude is good for text processing and conversational tools. Stability AI brings text-to-image tools to Bedrock, including Stable Diffusion, which can be used for images, art, logos and graphic design. The most recent version of Stable Diffusion has improved text accuracy and clarity. Using Bedrock, developers will be able to create tools that combine models.

Amazon’s own Titan models include a text model and an embeddings model. The text model supports generation tasks such as writing a blog post or a sales pitch, while the embeddings model translates text into numerical representations that capture its semantic meaning.

Any of the models can then be further trained on labelled datasets stored in S3, Amazon’s cloud storage service. As few as 20 well-labelled examples are required to make a model work against proprietary information, and none of that data will be used to train the underlying models, according to Amazon.
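
As a small, hedged sketch of the first step in that workflow (the customization API itself is not shown), the snippet below stages a tiny labelled JSONL dataset in S3 with boto3; the bucket, key and field names are illustrative placeholders.

```python
# Sketch: stage a small labelled dataset in S3 for later model customization.
# The bucket name, object key and JSONL field names are placeholders.
import json
import boto3

labelled_examples = [
    {"prompt": "Summarise our refund policy.", "completion": "Refunds are issued within 30 days."},
    {"prompt": "What is the warranty period?", "completion": "All products carry a two-year warranty."},
    # ...roughly 20 well-labelled examples in total, per Amazon's guidance
]

jsonl_body = "\n".join(json.dumps(example) for example in labelled_examples)

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-company-training-data",       # placeholder bucket
    Key="bedrock/fine-tune/examples.jsonl",  # placeholder key
    Body=jsonl_body.encode("utf-8"),
)
print("Uploaded labelled examples to S3")
```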

“At Amazon, we believe AI and ML are among the most transformational technologies of our time, capable of tackling some of humanity’s most challenging problems. That is why, for the last 25 years, Amazon has invested heavily in the development of AI and ML, infusing these capabilities into every business unit,” the company said in a statement.

In the same statement, Amazon highlighted the use of custom chips to bring down the cost of running generative AI workloads: these ultra-large models require massive compute power to run in production, and AWS Inferentia chips can be used to make inference more efficient and reduce cost at enterprise scale.

AWS Bedrock has ‘a lot of catching up to do’

The company is also opening up its answer to Microsoft’s GitHub Copilot, a tool widely used by developers to help write code. Amazon is making CodeWhisperer available for free for individual developers. It is an AI-powered coding companion that can offer code suggestions based on previously written code or comments. There are no usage limits for the free version, but a paid tier, for professional use, also includes enterprise security and admin capabilities.

Daniel Stodolsky, former Google Cloud VP and current SVP of Cloud at SambaNova, said the old cloud argument of bringing compute to your data doesn’t stack up in the new world of generative AI. “Whereas other cloud services such as predictive analytics rely on huge volumes of real-time data, Amazon says the process of customising its pre-trained LLM can be completed with as few as 20 labelled data examples,” he said.

“The trend for generative AI will be towards open best-of-breed approaches rather than vendor lock-in and closed models. It’s much better to own a large language model that’s built and fine-tuned for your use-case rather than relying on an off-the-shelf model with minimal customisation.

“The other consideration is getting value from generative AI quickly. Amazon’s Bedrock service is only in limited preview right now and anyone looking at AWS Service Terms will find that Service Level Agreements don’t apply – in other words, it’s not production ready and won’t be for some time. Generative AI is a race that’s already well underway, and Amazon clearly has a lot of catching up to do with other production-ready platforms.”

Courtesy: https://techmonitor.ai/



Thursday, 20 April 2023

A Google Cloud Platform strategy that delivers? The 4 key steps to success

 


The cloud allows you to scale to a level of computing, networking and storage that you could not otherwise achieve. It enables continuous innovation and ease of collaboration. But to achieve these benefits, it’s important to design your cloud solution with high availability, security and governance. In this blog, we explain how to achieve this with Google Cloud Platform.

While there may be specific circumstances in which it is advisable to opt for a hybrid cloud strategy, expert advice is increasingly focused on moving to public cloud. There are several reasons for pursuing a public cloud strategy, including:

CapEx vs. OpEx. Cloud computing moves organizations from a CapEx (capital expenditure) model to an OpEx (operating expense) model, with OpEx offering low or no upfront costs and tax-deductible benefits. It also lowers risk and exit costs.

Lower TCO (total cost of ownership). You save money as you don’t need to build out your own data center with all the ongoing associated maintenance and running costs.

Pay-as-you-go philosophy. You only pay for what you use, unlike an on-premises data center, which requires overprovisioning, meaning you pay for computing resources regardless of whether you use them.

Flexibility and agility. Public cloud makes it far easier to adapt your IT projects as needed, with almost immediate provisioning and the ability to scale up or down rapidly.

Over and above these benefits, CIOs want a solution that provides security and governance. To achieve this, it’s best to follow a proven methodology and take it step by step.

Designing a step-by-step Google Cloud strategy that delivers

At SoftwareOne, we follow a well-practiced step-by-step process to ensure all the benefits and requirements of a Google Cloud project are met.

Discover. We start by understanding the customer’s desired business outcomes, then establish the scope of the IT project needed to achieve them. We analyze the client's services and applications, evaluating their cloud maturity and the scenarios available to them. In this phase we collect data, interview the personnel involved (both IT and business), and identify and group infrastructure by dependencies.
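
As a hedged illustration of the kind of data collection this phase involves (not SoftwareOne's actual tooling), the sketch below uses the google-cloud-compute client library to inventory Compute Engine instances across zones so workloads can later be grouped by dependencies; the project ID is a placeholder.

```python
# Illustrative discovery sketch: list Compute Engine instances in every zone.
# Requires the google-cloud-compute package and application default credentials.
from google.cloud import compute_v1


def list_instances(project_id: str) -> None:
    client = compute_v1.InstancesClient()
    # aggregated_list yields (zone, scoped result) pairs across all zones.
    for zone, scoped_result in client.aggregated_list(project=project_id):
        for instance in scoped_result.instances:
            machine_type = instance.machine_type.split("/")[-1]
            print(f"{zone}: {instance.name} ({machine_type})")


if __name__ == "__main__":
    list_instances("my-gcp-project")  # placeholder project ID
```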

Execution planning. Here we define the design and architecture model, following the standard and proven Google Cloud methodology. We determine what needs to move and how this can be moved. This could include computing, networking, storage, databases, serverless scenarios or containers. We prepare the landing zone, clarifying the aspects of identity management, networks, security and billing. And we prepare for the kickoff, specifying tasks and workflows.

Project start. We subdivide the project into the phases of adoption, migration, transformation and obtaining results. The plan designed in the previous phases is executed, always iteratively, and decisions and improvements are made based on the results of the ongoing analysis.

Optimize. Here we monitor and operate the migrated services and infrastructure, providing centralized management. We advise on continuous improvement by evaluating new services and products, and on cost optimization and control of resources and billing.

What projects can be tackled with Google Cloud Platform?

Examples of projects we have done with customers are:
  • Workload migration
  • Transformation and modernization
  • Workstation virtualization
  • Datacenter extension
  • DRaaS, backup and continuity management
  • Microsoft Active Directory Managed Service
  • Remote and easy access for internal web applications with BeyondCorp
  • Virtualization based on VMware


For each scenario we apply a decision model that, after discovery and planning, consists of deciding whether to migrate (rehost or replatform) or change (refactor or rebuild). Then we enter a continuous cycle of improvement and optimization.

To undertake each project with customers, we have total flexibility to adapt to their needs. We provide turnkey projects, where scope, time and price are agreed up front; an agile model with an agreed bag of hours for development and maintenance; a baseline approach in which we provide skilled resources, consulting or training; or a combination of any of these depending on requirements.

Courtesy: https://www.softwareone.com/

Wednesday, 19 April 2023

How can businesses get the most value from AI?

As various high-profile fiascos have demonstrated, getting your data house in order, building guardrails and winning trust are key to effective artificial intelligence deployments


When top OpenAI investor Microsoft unleashed a ChatGPT-infused Bing search on the world, it wasn’t long before it went haywire, comparing journalists to Hitler and gaslighting its users. Of course, these deranged tirades were not really an AI going rogue or anything of that sci-fi ilk; the tool is a probabilistic program that, having scraped the internet and all the junk on it as its source, returns answers that it thinks are likely to be correct. The whole episode did, however, highlight the need for a considered approach to AI deployments, especially when they’re public-facing. Above all, it demonstrated that AI needs precise use cases informed by good, up-to-date data, and guardrails to ensure it’s on the right track.

“Microsoft, Bing, OpenAI and ChatGPT have done the world a favour,” comments EMEA field CTO at Databricks, Dael Williamson, “because on the one hand, they’ve shown us the art of the possible – but they’ve also shown us the respect we have to give to training data.”

As amusing as the headline-grabbing antics of abusive chatbots might be, what will really be front of mind for most businesses as they seek to leverage artificial intelligence is how it can help them work smarter and more efficiently. For example, Williamson saw the power of AI in his previous career in proteomics, with simulations for drug discovery that used to take 25 days now taking just a few hours. And across all kinds of industries, businesses are using AI in ways that might not make headlines but are helping them provide better solutions and services. Whether we’re aware of it or not, many of us interact with AI on a daily basis – from the navigation tools that plot courses for Uber to Amazon’s recommendation engines.

“It all starts with data,” says Williamson. “Before businesses can create AI models that actually deliver value, they need to ensure the source data they’re building from is accurate, complete, timely and fair.” 

While the transformational potential of AI really is enormous, and may change the world in unforeseen ways, most businesses will be seeking to use AI to improve their business processes. Decision-makers have certainly noted the potential. In a recent MIT Technology Review survey conducted with Databricks, CIOs estimated that over the next three years AI-related spending will increase by 101% for security, 85% for data governance and 69% for new data and AI platforms. To ensure that it’s AI driving the efficiencies, rather than a tail-wagging-the-dog situation where the technology goes in search of a problem, businesses will need to first identify the use cases that would actually benefit from these rollouts and, crucially, ensure their data is in order.

Artificial intelligence is only as good as the data that feeds it. Unfortunately for weary data scientists, who spend an astonishing 80% of their time searching for the stuff, most organisations are sitting on incredible treasure troves of data, but it’s scattered and hard to find. This is unsurprisingly a barrier to using it effectively, let alone for building effective AI models.

If not hidden down the proverbial sofa, this data is siloed, disconnected and stored in different databases and formats. In short, staff in department A may not know about the data in department B, and even if they do, they’d struggle to connect it. To get around this, businesses need to unify their data environment. “We call it the ‘lakehouse’ concept – think of it as the production and distribution of data and models,” says Williamson of this open architecture proposal, “where it covers all the value units you’d typically want to have your data go through.”

By unifying all of your business data and applying governance to it, businesses make the data much more observable, which makes it easier to maintain and manage data integrity. With this data organised, accessible and standardised, they can pick and choose which data sets are the most appropriate for the model they’re building, whether that’s large language models, computational models, deep or machine learning, and then build the applications on top of that.
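
As a hedged sketch of that "unify, then govern" step, the snippet below lands raw files as a standardized Delta table and grants an analyst group read-only access. It assumes a Databricks-style Spark environment with Delta Lake and table access controls; the paths, table name and group name are placeholders.

```python
# Minimal lakehouse-style sketch: land raw data as a governed Delta table.
# Assumes a Spark environment with Delta Lake (e.g. Databricks); the S3 path,
# table name and `analysts` group are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

# Ingest scattered raw files into one standardized, queryable table.
raw_events = spark.read.json("s3://raw-landing-zone/clickstream/")
spark.sql("CREATE SCHEMA IF NOT EXISTS sales")
raw_events.write.format("delta").mode("overwrite").saveAsTable("sales.bronze_events")

# Apply a simple governance rule: analysts may read, but not modify, the table.
spark.sql("GRANT SELECT ON TABLE sales.bronze_events TO `analysts`")
```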

“That’s the technology, but the hard bit is change management and trust,” says Williamson. No wonder; those aforementioned fearful headlines often frame artificial intelligence as a uniquely disruptive force that’s set to play havoc with society as we know it, shredding the social contract and discarding its hapless victims. That’s not the case at all – most businesses will simply be attempting to drive efficiencies, using automation to sluice away the most dreary manual tasks, which often don’t scale without a little technological assistance.

Take the humble elevator, for example, notes Williamson. For many years, lifts were staffed by an attendant, greeting users and pulling the levers. It took a long while before people trusted these newfangled automated contraptions enough to press a button, but now it’s as intuitive as crossing the road. Change can take time, and that’s why it’s so vital organisations manage it carefully, rolling out AI deployments with openness and transparency. At the very least, they should work with technology that operates a sort of “glass box” model – as opposed to an opaque “black box” with all the inner workings hidden away – so that users understand exactly what is going on and why.

“If you translate it to people, process and technology, technology needs to be simplified and made uncomplicated, while process is the real ‘unlock’ to create efficiency, build trust and transparency through that,” says Williamson.

Today, it’s really only the dawn of the AI era, but soon enough it’ll become evident that people will largely interact with machines as co-pilots, much the same as other transformative technologies like the printing press and the internet. Communicating this to users is key: “We need transparency, open data and trust,” Williamson says, with projects that demonstrate their value to staff outside of data science functions. “The few enable the many – that’s the bottom-up way of thinking about it. There also has to be a top-down commitment from the C-suite and all business leaders to work together; a partnership between those two groups, where everyone is rowing in the same direction.”

Courtesy: https://www.raconteur.net/



Tuesday, 18 April 2023

Effective Client Management




Client management is a critical aspect of running a successful business. It involves building and maintaining positive relationships with clients to ensure they are satisfied with your products or services. Good customer management can lead to increased revenue, repeat business, and referrals.


Here are some key strategies for effective client management:


1. Establish Clear Communication Channels :

Effective communication is key to successful customer management. You need to establish clear channels of communication with your customers to ensure that they can reach you easily whenever they need to. This could be via email, phone, or even social media.

Make sure to respond to your customer inquiries promptly and professionally. Be sure to keep them updated on any changes to their orders, projects, or services. This will help build trust and establish a positive relationship with your customers.


2. Build Strong Relationships :
 
Building strong relationships with your customers is essential for good customer management. You need to understand their needs and requirements, and be able to provide them with personalized solutions that meet their specific needs.

Make an effort to get to know your customers on a personal level. Remember their names and birthdays, and send them personalized emails or cards on special occasions. This will help you build a connection with them, and make them feel valued.


3. Be Responsive :

Customers appreciate responsiveness. You need to be available to your customers whenever they need you. This means responding to their emails or calls promptly, and providing them with the information or assistance they need as quickly as possible.

If you are not available to respond to their inquiries immediately, make sure to acknowledge their message and let them know when you will be able to get back to them.


4. Set Realistic Expectations :

Setting realistic expectations is key to good customer management. You need to be transparent with your customers about what you can and cannot do. This will help you avoid misunderstandings or unmet expectations down the line.

Make sure to provide your customers with a clear timeline for delivery or completion of their projects or services. Be realistic about the amount of time and resources required to complete the work, and make sure your customers understand the process.


5. Provide Excellent Customer Service :

Providing excellent customer service is critical to good customer management. You need to make sure that your customers are satisfied with your products or services, and that their needs are being met. Make an effort to go above and beyond for your customers. Provide them with personalized solutions, offer discounts or special promotions, and be willing to make exceptions for special circumstances.


In conclusion, good customer management is essential for the success of any business. By establishing clear communication channels, building strong relationships, being responsive, setting realistic expectations, and providing excellent customer service, you can build a positive reputation and ensure the satisfaction of your customers.


Written by, Siddhi Shinde (Project Management Officer, Cloud.in)



Monday, 17 April 2023

Everything is moving to the cloud. But how green is it, really?

Our everyday tasks are increasingly digital, supported by tools and services that are based on some remote server farm. How do we assess the carbon footprint left by data centers?


It's hard to function in modern life without the 'cloud'. Our everyday tasks are increasingly digital, supported by tools and services that are based on some remote server farm. The cloud, after all, is just someone else's computer (or server).

Now it's certainly the case that the cloud helps enable a fairly low-carbon footprint, allowing people to accomplish a lot without burning fuel to get anywhere, like working from home or navigating more efficiently to avoid traffic jams. At the same time, it's easy to forget that the cloud has its own carbon footprint, left by data centers buzzing with digital activity. 

"At the end of the day, the internet is running on data centers, and from an operational perspective, the data centers are running on energy," Maud Texier, Google's head of clean energy and carbon development, tells ZDNET. "So, this is the primary source of greenhouse gas emissions -- when someone is using the cloud, is typing an email and creating something new."


Before attempting to determine how green the cloud is, it's worth revisiting just what exactly the 'cloud' is. This somewhat cryptic tech term simply refers to computing services delivered over the internet. That definition covers everything from applications like Instagram or Google Search to foundational computing services like processing power and data storage. Companies can decide to manage their digital operations on their own servers (typically in an on-premises data center) or via a cloud provider like Google Cloud, Amazon Web Services or Microsoft Azure. 

More data doesn't equal more energy consumption

Given the way the digital economy has exploded over the past two decades, it'd be easy to assume that the cloud's carbon footprint has also spiked. Luckily, that's not the case. 

Research published in 2020 found that the computing output of data centers increased 550% between 2010 and 2018. However, energy consumption from those data centers grew just 6%. As of 2018, data centers consumed about 1% of the world's electricity output. 

The tech industry has managed to keep its energy consumption requirements in check by making huge energy efficiency improvements, as well as taking a range of other strategic moves. 

Cloud vs data centers

Cloud migration has been huge -- the share of corporate data in the cloud jumped from 30% in 2015 to 60% in 2022.

But organizations mostly aren't moving to the cloud to make their operations more sustainable, notes Miguel Angel Borrega, research director for Gartner's infrastructure cloud strategies team. 

"There are other variables that are even more important than sustainability," he says to ZDNET -- such as cost savings or the ability to leverage the latest technologies from cutting-edge innovators like Google and Microsoft. That said, sustainability ends up as a clear benefit as well.  

"When we compare gas emissions, energy efficiency, water efficiency, and the way they efficiently use IT infrastructure, we realize that it's better to go to the cloud," Borrega says. 

One major reason cloud providers can run more efficiently, he says, is simply that their infrastructure is newer. Many existing corporate data centers are 30 or 40 years old, meaning they aren't taking advantage of more recent gains in energy efficiency.

Renewable energy

One of the main drivers for reducing greenhouse gas emissions is using renewable sources of energy. Traditional data centers are normally powered with energy from fossil fuel sources, but new cloud regions are increasingly tapping renewables. 

In cases where they can't use renewables, cloud companies are now often committed to compensating for their energy use with zero-carbon energy purchases, or carbon credits -- effectively investing in future carbon-free uses. For example, Microsoft has pledged to have 100% of its electricity consumption matched by zero-carbon energy purchases by 2030. 

[Image: Microsoft's green energy goals. Source: Microsoft]

"Like other users, our datacenters and our offices around the world simply plug into the local grid, consuming energy from a vast pool of electrons generated from near and far, from a wide variety of sources," Microsoft executives wrote at the time. "So while we can't control how our energy is made, we can influence the way that we purchase our energy."

Amazon, meanwhile, says it's on track to power all of its operations with 100% renewable energy by 2025. That includes Amazon's operations facilities, corporate offices and physical stores, as well as Amazon Web Services (AWS) data centers. It says it's committed to reaching net-zero carbon across its operations by 2040. 

Google started its cloud sustainability efforts in 2007 by purchasing high-quality carbon credits. In 2010, it began finding clean energy sources and adding clean energy to the grid to compensate for its consumption. And since 2017, the company has been buying enough renewable energy to match its consumption. 

[Image: Timeline of Google's green energy efforts. Source: Google]

In 2020, Google began tracking a new metric, the carbon-free energy percentage (CFE%). This metric represents the average percentage of carbon-free energy consumed in a particular location on an hourly basis, while taking into account the carbon-free energy that Google has added to the grid in that particular location. So for businesses, the CFE% represents the average percentage of time their applications will be running on carbon-free energy.
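
To make that definition concrete, here is a toy illustration of the idea (not Google's exact methodology); the hourly figures are invented.

```python
# Toy illustration of the CFE% metric described above, with invented numbers.
# Each entry is one hour: (carbon-free energy consumed, total energy consumed), in MWh.
hourly_consumption = [
    (8.0, 10.0),
    (6.5, 10.0),
    (9.0, 9.5),
]

# Average the hourly carbon-free share over the period.
hourly_shares = [cfe / total for cfe, total in hourly_consumption]
cfe_percent = 100 * sum(hourly_shares) / len(hourly_shares)
print(f"CFE% for this location over the period: {cfe_percent:.0f}%")
```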

Google also set a goal in 2020 to match its energy consumption with carbon-free energy (CFE), every hour and in every region by 2030. As of last year, Texier says about two-thirds of Google's energy consumption relied on CFE. 

[Image: Example chart of carbon-free energy supply. Source: Google]

"There's still more work to do," she says. "It's going to be much more regional -- how do we talk with regional stakeholders and utilities as they try to change the grids?"

Where is your cloud running? 

Location is an important aspect to consider for anyone trying to assess just how 'green' a specific cloud is, as Texier suggests. Some of Google's data centers, in places such as Finland, Toronto and Iowa, have a CFE% above 90. Others, such as data centers in Singapore, Jakarta and South Carolina, are closer to 10% or 20%. 

"This is one of the biggest realizations that came to us when we switched from this global annual goal to this small, more specific, 24/7 goal," she says. "That actually, there's a very large variability within the portfolio. And we have to be much more surgical in terms of the roadmap for each data center."

In places such as the Asia Pacific region, Texier says the barriers to greater renewables adoption are often geographical -- there's just not a lot of space to create renewable energy. Instead, energy providers have to build "islanded grids" that provide energy from sources such as offshore wind, which is more expensive and built on newer technologies. 

Meanwhile, in places like the US South, Texier says there are fewer options for energy customers like Google to purchase green energy. 

"Big picture right now, there's a lot of demand for renewable energy, not just from Google, but from a lot of corporations," she says.

"It's really been a booming market, which on one side is is really helpful to accelerate the deployment of more renewable energy. On the other side, what we are realizing now is that the needs of deployment of clean energy and renewable energy cannot be met with the current processes that we have."

Getting more efficient

While cloud providers work with the energy sector and regulators to create more renewable energy options, they're also getting more efficient at running their operations. A data center requires a great deal of power to run workloads, maintain data storage, run cooling systems, distribute energy, and so forth. With advances in areas such as refrigeration and cooling systems, cloud providers can dedicate more energy to providing computing power. 

At the same time, cloud providers can offer efficient server utilization. 

"Imagine you have a server that can support 100 workloads," Borrega says. "Normally what we see is that to run this basic volume of workloads, on average [data centers] use only 40% of their computing resources. But we are powering it with all the energy to support this potential functionality. So in data centers, normally IT infrastructure is used on average at 40%. When we move to cloud providers, the rate of efficiency using servers is 85%. So with the same energy, we are managing double or more than double the workloads."

Meanwhile, cloud providers are running workloads more efficiently as they design new technologies. AWS, Google and others are building their own custom chips and hardware to give customers the most computing power while using the least possible energy.

Courtesy: https://www.zdnet.com/

