Recently at Cloud Expo in New York I caught up with Duke Skarda, the Vice President of Information Technology and Software Development at The Planet, and learned about his company’s Infrastructure as a Service business, which targets companies with hardened websites that need to be up 24×7. He explained that his company believes many customers want a blend of cloud and premises-based solutions.
In the video above, Skarda explains that a growing part of his business is experimental projects, where companies hope for the best but plan for the worst by using cloud-based infrastructure to gain the resiliency of an always-on network without the CAPEX of building it themselves. The ability to quickly scale an application at a fraction of the cost of owning all the equipment means companies can experiment with services in ways that weren’t possible in the past.
As a follow-up to this meeting, I spent some time with The Planet’s Chairman and CEO Doug Erwin at Interop 2010 in Las Vegas to hear how he put together what he calls the world’s largest hosting company. The Houston-based organization currently has 46,733 servers across eight data centers, 170,000 square feet of data center space and 522 people.
Erwin explains what makes his company different this way: “We believe customers should have the power to choose at one-stop shopping. If they want a server today and one tomorrow and hook in cloud storage or move into the cloud itself, we want to be able to offer that broad range of product.” In other words, the company can provide enhanced services to data center clients such as backup, firewall, virtual racks, security and WAN management.
An interesting point from our conversation is that the cloud can actually be more expensive, since you pay a premium for on-demand scaling. He suggests companies run the models before deciding which route to take. As an alternative, he says, companies should consider simply adding dedicated servers.
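The kind of model Erwin recommends can be as simple as a monthly break-even comparison. Here is a minimal sketch; all of the prices and figures below are hypothetical placeholders for illustration, not The Planet’s (or anyone’s) actual rates.

```python
# Hypothetical break-even sketch: on-demand cloud pricing vs. an
# owned dedicated server. Every number here is an assumption.

def monthly_cloud_cost(hours_used: float, rate_per_hour: float) -> float:
    """Cloud: pay only for the hours you actually run."""
    return hours_used * rate_per_hour

def monthly_dedicated_cost(server_price: float, amortize_months: int,
                           monthly_opex: float) -> float:
    """Dedicated: amortized hardware cost plus fixed operating expense."""
    return server_price / amortize_months + monthly_opex

# Example inputs: a $0.25/hour cloud instance running 24x7 (720 hours),
# versus a $2,400 server amortized over 36 months with $50/month
# in power and bandwidth.
cloud = monthly_cloud_cost(hours_used=720, rate_per_hour=0.25)
dedicated = monthly_dedicated_cost(2400, 36, 50.0)

print(f"cloud: ${cloud:.2f}/mo, dedicated: ${dedicated:.2f}/mo")
# At steady 24x7 usage the dedicated server wins; at a few hours a
# day, the cloud does -- which is exactly why running the model matters.
```

The takeaway matches Erwin’s point: on-demand pricing favors bursty, experimental workloads, while a steady 24×7 load often pencils out cheaper on hardware you own or lease outright.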
A common theme in my conversations with both men was the idea that the cloud means all things to all people. To them, it is about shared technology and dispersal; moreover, it entails the ability to redirect to any server on a dime.
The company achieves this goal by keeping storage in a SAN, allowing new servers to be brought up immediately when one fails. Of course, this definition of cloud computing may jibe with yours, or you may consider it a subset of elastic computing solutions from companies like Amazon. Either way, the ability to immediately access massive amounts of compute power at relatively low short-term prices will continue to provide abundant opportunities for organizations with champagne adoption goals and beer-sized budgets.