Enterprise vs. commodity-class data center strategies

The virtualized, multi-tenant data center is at the heart of cloud computing and every cloud-based service. All the applications and services we retrieve or consume in the cloud ultimately reside in some data center, presumably built according to cloud criteria: loosely coupled, shared, virtualized resources, auto-provisioning, auto-scaling, elasticity and so on.
The primary difference between a traditional corporate data center and a cloud computing data center lies in scalability and elasticity. The same applies to the difference between hosted services and cloud services: the latter combine a pay-per-use model with rapid, automatic scaling of resources up or down, along with workload migration.
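To make the elasticity idea concrete, here is a minimal, hypothetical sketch in Python of the kind of rule a cloud platform applies automatically: capacity is added when load is high, removed when load is low, and the bill follows what is actually running. The class name, thresholds and price are illustrative only, not any particular provider's API.

# A hypothetical elasticity rule: scale out under load, scale in when idle,
# and bill only for the capacity that was actually running (pay-per-use).
class ElasticPool:
    def __init__(self, min_servers=2, max_servers=100, price_per_server_hour=0.10):
        self.servers = min_servers
        self.min_servers = min_servers
        self.max_servers = max_servers
        self.price_per_server_hour = price_per_server_hour
        self.billed_hours = 0.0

    def rebalance(self, avg_cpu_utilization):
        # Add a server when the pool runs hot, remove one when it idles.
        if avg_cpu_utilization > 0.75 and self.servers < self.max_servers:
            self.servers += 1
        elif avg_cpu_utilization < 0.25 and self.servers > self.min_servers:
            self.servers -= 1

    def charge(self, hours):
        # Pay-per-use: the bill tracks actual server-hours, not peak capacity.
        self.billed_hours += self.servers * hours
        return round(self.billed_hours * self.price_per_server_hour, 2)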

Many new and aspiring cloud service providers, especially in IaaS and PaaS, are looking for ways to make their services more economical and competitive. This ultimately comes down to the data center level. Telcos, for example, as I outlined in an earlier post, have certain capabilities and strengths they can leverage to provide enterprise-grade virtual private cloud services, emphasizing network reliability, minimizing latency, providing SLAs, and so on. Traditionally, they have built carrier-class data centers with enterprise-level infrastructure, providing maximum resiliency and fault tolerance with uptime guarantees of more than 99.9%. The problem? Not all enterprises are willing to accept the higher price tag that results from expensive technology and equipment in the provider’s data center. For many, a less costly public cloud provider would seem an attractive alternative, especially if security issues can be resolved. Overly expensive, high-grade infrastructure either prices the service provider out of the competition or delivers unsustainably low profit margins. Either way, the cloud services will not deliver the long-term profitability and sustainability the service provider expects.

Compare this scenario to Google’s data center strategy, for example. Google has determined that fault tolerance is too expensive to maintain fully at the hardware level. Instead, Google, and indeed many other public cloud providers, use the cheapest parts available that are still reliable and live with failures as they occur. Fault tolerance is instead increasingly implemented in software. In fact, Google, Yahoo! and many other cloud providers have adopted Internet principles in their data center designs, using inexpensive commodity components and identical computers that, together with automatic fail-over mechanisms, ensure sufficient tolerance and reliability. Although some parts will eventually fail, there are plenty of others available to take over the tasks of the failed component. Even at the internal networking level, Google prefers lower-speed Ethernet adapters and switches over the much more capable 40 Gbit/s InfiniBand technology. The reason? Low-cost fabrics built from commodity Ethernet switches save hundreds of dollars per server. Today, Google, Facebook and others either build the servers for their data centers themselves or have them custom-made by vendors such as Dell. These are x86-type servers made of commodity parts and stripped of every feature that is not strictly necessary. To save further, Google even builds a backup power supply into each server instead of running a central UPS system, a practice that was practically unheard of in the data center business.
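As a rough illustration of tolerance-in-software, the sketch below assumes a hypothetical cluster of identical commodity workers: nodes that stop sending heartbeats are declared dead and their tasks are simply handed to the surviving machines. The names and timeout are made up for the example; real systems like Google’s are far more sophisticated.

# Tolerance in software, sketched: a hypothetical monitor drops workers whose
# heartbeats have gone silent and reassigns their tasks to surviving machines.
import random
import time

HEARTBEAT_TIMEOUT = 10  # seconds of silence before a worker is declared dead

def reassign_failed(workers, tasks, last_heartbeat):
    """workers: set of node ids; tasks: dict mapping node id -> list of tasks."""
    now = time.time()
    for node in list(workers):
        if now - last_heartbeat.get(node, 0) > HEARTBEAT_TIMEOUT:
            workers.discard(node)                  # drop the failed machine
            orphaned = tasks.pop(node, [])
            if not workers:                        # nothing left to fail over to
                break
            for task in orphaned:                  # survivors pick up its work
                target = random.choice(sorted(workers))
                tasks.setdefault(target, []).append(task)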

Moreover, Google, Amazon and some others gain an extra advantage from running massive public cloud data centers, where economies of scale yield lower administration costs per unit and, usually, lower energy consumption per unit as well. Another advantage is data processing capability. Consider, for instance, the MapReduce framework, originally popularized by Google, which is used to process huge datasets on a large number of commodity servers arranged in a cluster. A large server farm can use MapReduce to sort a petabyte of data in only a few hours. Although MapReduce implementations have been built by many other organizations, doing so takes a significant amount of technical effort.
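The MapReduce idea itself is simple enough to show in a few lines. The following single-machine Python sketch counts words: a map phase emits key/value pairs, a shuffle step groups them by key, and a reduce phase aggregates each group; on a real cluster the same two functions run in parallel across thousands of commodity servers. The function names here are illustrative, not Google’s actual API.

# An illustrative, single-machine sketch of the MapReduce pattern (word count).
from collections import defaultdict

def map_phase(document):
    # Emit a (word, 1) pair for every word in the input split.
    return [(word, 1) for word in document.split()]

def reduce_phase(word, counts):
    # Aggregate all counts emitted for one key.
    return word, sum(counts)

def map_reduce(documents):
    groups = defaultdict(list)            # shuffle: group emitted values by key
    for doc in documents:
        for word, count in map_phase(doc):
            groups[word].append(count)
    return dict(reduce_phase(w, c) for w, c in groups.items())

print(map_reduce(["the cloud scales", "the cloud fails and recovers"]))
# {'the': 2, 'cloud': 2, 'scales': 1, 'fails': 1, 'and': 1, 'recovers': 1}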

So the question remains: what can smaller cloud providers, like many telcos, do to even come close to offering services that can cost-effectively compete with the leading public cloud providers like Amazon? The short answer would be that it’s impossible. Then again, several providers, like Korea’s KT, have started to apply the same principles as the big public cloud providers to deliver cloud services to their local markets. From a recent news story in InformationWeek:
“KT’s approach to cloud computing is bold,” said Randy Bias, CEO and founder of Cloudscaling. “Modeling their cloud computing architecture after the most efficient and lowest-cost public cloud providers should allow them to leapfrog regional competitors who are building clouds based on enterprise architectures.”
It’s quite likely that many other upcoming cloud service providers will follow a similar route to KT’s, hoping for better economics and a stronger competitive position.
