
June 18, 2010

Peaking at "The Cloud"

OK, so "Cloud Computing" is a fancy way of saying hosted data storage, servers or applications. Your stuff on someone else's hardware, for a fee. They do the drudge work, you do the thinking and rake in the money, right? Or not, as the case may be, as cloud resources still need to be managed by your own in-house team, but they just have less control and more constraints.

I went to the Web 2.0 conference a couple of months back, and this year I was very encouraged to see some of the pure BS hype from last year being dialed back. Plus, there were people actively promoting security best practices in the "cloud" space.

I have spent a lot of time trying to figure out what the heck I would really recommend companies use "cloud" computing resources for. What, exactly, is my idea of the "killer app" for "cloud computing"? It eluded me for a while, until one day I went to that venerated geek haven, Slashdot.

Who in the high tech industry hasn't heard of the "Slashdot Effect": an article gets written about your company or website, and the attention and resulting web traffic crash your site? The Slashdot Effect exceeds all planned peak load (if you ever planned for such a thing) and gives systems people gray hair overnight. And it isn't just Slashdot that causes this - as Amazon found out with the Harry Potter releases.

The killer app, then, becomes the ability to handle those peak loads - seamless peaking - without keeping a large number of expensive machines idle 95% of the time. It must be cheaper than provisioning a bunch of machines and leaving them running; it must come on-line rapidly; and it must be responsive to the demand. We are talking about dynamic peaking of web services.

IMO, this type of application would be a virtualization specialty, with load balancing and network allocation in the mix. Dynamic allocation of compute resources is a must. While the initial setup and configuration would be done under non-load conditions, the actual engagement of the resources would be driven by demand and automated to happen in near-realtime.
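
To make that concrete, here is a minimal Python sketch of the idea, under invented assumptions: the per-instance capacity, the floor and ceiling, and the provision_instance / release_instance calls are all hypothetical stand-ins for whatever a real provider exposes, not any particular vendor's API.

    # Minimal sketch of demand-driven allocation. The capacity figure and the
    # provision/release calls below are hypothetical stand-ins, not a real API.

    REQUESTS_PER_INSTANCE = 500   # assumed capacity of one pre-configured image
    MIN_INSTANCES = 2             # the always-on baseline
    MAX_INSTANCES = 40            # the "peak up to a certain level" ceiling

    running = []                  # handles of currently engaged instances

    def provision_instance():
        """Stand-in for the vendor's 'start a pre-built image' call."""
        handle = "vm-%d" % len(running)
        print("engaging", handle)
        return handle

    def release_instance(handle):
        """Stand-in for the vendor's 'stop it and stop billing me' call."""
        print("releasing", handle)

    def reconcile(current_rps):
        """Bring the running pool in line with observed demand."""
        wanted = -(-int(current_rps) // REQUESTS_PER_INSTANCE)   # ceiling division
        target = max(MIN_INSTANCES, min(MAX_INSTANCES, wanted))
        while len(running) < target:
            running.append(provision_instance())
        while len(running) > target:
            release_instance(running.pop())

    if __name__ == "__main__":
        for rps in (300, 4200, 19000, 2500, 300):   # demand rising, then falling off
            reconcile(rps)
            print("%d req/s -> %d instances" % (rps, len(running)))

The point of making reconcile symmetric is that the same loop that engages instances on the way up hands them back on the way down.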

Automation is key here. The Slashdot Effect happens rapidly, often faster than you can page a sysadmin out of bed at three a.m. Yet it is not so rapid that the traffic chart is a complete right angle. That makes monitoring the linchpin - monitoring smart enough to discern between a DDoS attack and a genuine demand peak. The idea is to bring enough resources on line rapidly enough to follow the trend without overshooting, and then to scale them back when the peak is over (something most peak scenarios neglect).
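
As a purely illustrative example of what "smart enough" might mean, here is one crude heuristic: a genuine popularity spike tends to be many distinct clients each making a handful of requests, while a naive flood tends to be a few sources hammering away. The function names and thresholds are invented for the sketch and are not tuning advice.

    # Crude, illustrative traffic check plus trend-based scaling.
    # Every number here is made up for the example.

    from collections import Counter

    def looks_like_real_demand(recent_client_ips, max_share_per_client=0.05):
        """True if no single client dominates the last interval's traffic."""
        counts = Counter(recent_client_ips)
        total = sum(counts.values())
        if total == 0:
            return False
        heaviest = counts.most_common(1)[0][1]
        return heaviest / float(total) <= max_share_per_client

    def next_target(current, trend_ratio, genuine, floor=2, ceiling=40):
        """Scale up while a genuine peak is building, scale back once it fades."""
        if genuine and trend_ratio > 1.2:        # sustained growth: add capacity
            return min(ceiling, current * 2)
        if trend_ratio < 0.8:                    # the peak is over: give it back
            return max(floor, current // 2)
        return current

    organic = ["10.0.%d.%d" % (i // 250, i % 250) for i in range(5000)]  # many clients
    flood = ["203.0.113.7"] * 5000                                       # one source
    print(looks_like_real_demand(organic))   # True  -> worth peaking for
    print(looks_like_real_demand(flood))     # False -> don't feed the attacker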

The cost-effectiveness of this is obvious. A base fee would be charged for configuring the various resources and for the ability to "peak" up to a certain level. Above that, only the peaking capacity actually brought on line and used would be charged for, by bandwidth, CPU, and disk space metrics. This, of course, is another reason why automatic de-provisioning has to be part of the package.
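
A back-of-the-envelope version of that billing model, with every rate invented purely for illustration, might look like this:

    # Hypothetical rates, chosen only to show the shape of the formula:
    # a flat fee for the configured ability to peak, plus metered charges
    # for what actually got used.

    BASE_FEE = 200.00             # per month, for the standby configuration
    RATE_PER_GB_TRANSFER = 0.15   # bandwidth actually served during peaks
    RATE_PER_CPU_HOUR = 0.10      # compute actually engaged
    RATE_PER_GB_STORED = 0.05     # disk held by the peak instances

    def monthly_bill(gb_transferred, cpu_hours, gb_stored):
        return (BASE_FEE
                + gb_transferred * RATE_PER_GB_TRANSFER
                + cpu_hours * RATE_PER_CPU_HOUR
                + gb_stored * RATE_PER_GB_STORED)

    # One Slashdotting: roughly 36 instance-days of extra compute, 800 GB served.
    print("peak month:  $%.2f" % monthly_bill(800, 36 * 24, 200))   # $416.40
    print("quiet month: $%.2f" % monthly_bill(0, 0, 0))             # $200.00
    # Compare that with the cost of keeping dozens of extra machines racked,
    # powered, and idle all year, waiting for a peak that may come twice.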

So there you have it - dynamic peaking - what I see as the basic outline for the killer "cloud" application: something that would let more sysadmins sleep at night, secure in the knowledge that the cloud will handle that surge in popularity that marketing has been promising "...any day now."

Posted by ljl at June 18, 2010 6:54 PM
