From 2009: Good and Bad Public Cloud Candidates
I recently had a good conversation with Ed and Dean of RightScale about the characteristics that make an application a good public cloud citizen.
Good Public Cloud Candidates
- No warm-up or cool-down period is required: newly started instances are immediately ready to work
- Work dispatching is simple when any instance can do the work
- Cloud computing enables quick deployment of a lot of CPU power to work on your problem, but if a large amount of data must be loaded before the CPUs can start computing, the latency and bandwidth costs go up
- Although cryptography is sufficient to protect your data, you don't want to pay the CPU overhead of encrypting and decrypting every piece of data you use in the cloud.
- You no longer need to provision in-house equipment for the peak load, equipment that would sit idle most of the time.
- The "pay as you go" model saves cost because you don't pay for capacity you are not using
- You don't need to take the risk of over-estimating the popularity of your new application and buying more equipment than you actually need.
- You don't need to take the risk of under-estimating its popularity and ending up frustrating your customers because you cannot handle their workload.
- You can defer a big up-front investment and still be able to try out new ideas.
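The "pay as you go" economics above can be sketched with a toy cost model. All the figures below (hourly rate, monthly server cost, usage pattern) are made-up assumptions for illustration, not real prices from any provider.

```python
# Toy cost model: "pay as you go" vs. provisioning in-house for peak load.
# All prices and workload numbers are hypothetical.

def on_demand_cost(hours_used: float, rate_per_hour: float) -> float:
    """Cloud: pay only for the hours the instances actually run."""
    return hours_used * rate_per_hour

def peak_provisioned_cost(peak_instances: int, monthly_cost_per_server: float) -> float:
    """In-house: pay for enough servers to cover the peak, all month long."""
    return peak_instances * monthly_cost_per_server

# Example: the workload needs 10 instances, but only 4 hours a day.
cloud = on_demand_cost(hours_used=10 * 4 * 30, rate_per_hour=0.10)
inhouse = peak_provisioned_cost(peak_instances=10, monthly_cost_per_server=80.0)

print(f"cloud:    ${cloud:.2f}/month")    # 1200 hours * $0.10 = $120.00
print(f"in-house: ${inhouse:.2f}/month")  # 10 servers * $80 = $800.00
```

With a bursty 4-hours-a-day workload the on-demand bill is a fraction of the peak-provisioned one; as the duty cycle approaches 24/7, the comparison flips, which is exactly the "always-on" caveat in the list of bad candidates below.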
Bad Public Cloud Candidates
- Demand for special hardware: most cloud equipment is cheap, general-purpose commodity hardware. If your application demands a particular piece of hardware (e.g. a graphics processor for rendering), it will either not work at all or be awfully slow.
- Most cloud providers disable multicast traffic in their networks
- If a legal or business requirement demands that your server run in a physical location that cloud providers don't cover, you are out of luck.
- Bandwidth cost across the cloud boundary is high, so you may end up with a large bill when loading a large amount of data into the cloud
- Loading a large amount of data also takes time. You need to compare that time with the overall processing time to see whether the move makes sense
- Legal, liability, and auditing practices haven't caught up yet. Companies running their core business applications in the cloud will face many legal challenges
- Since you have no control over where the machines reside, latency usually increases when your app runs in a cloud located far from its users
- If you keep a machine running without ever shutting it down, many cost analyses show that running it in-house is cheaper (especially for a large enterprise that already has a data center and a team of system administrators)
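The "compare load time with processing time" point above is a back-of-the-envelope calculation. Here is a sketch of it; the dataset size, link speed, and compute time are hypothetical numbers chosen for illustration.

```python
# Back-of-the-envelope check: does it pay to ship the data to the cloud?
# All figures (data size, link speed, compute time) are hypothetical.

def transfer_seconds(data_gb: float, link_mbps: float) -> float:
    """Time to push `data_gb` gigabytes over a `link_mbps` megabit/s link."""
    return (data_gb * 8 * 1000) / link_mbps  # GB -> megabits, divide by rate

data_gb = 500        # dataset to upload
link_mbps = 100      # sustained upstream bandwidth
compute_hours = 2.0  # how long the cloud CPUs actually crunch

upload_hours = transfer_seconds(data_gb, link_mbps) / 3600
print(f"upload:  {upload_hours:.1f} h")   # 500 GB over 100 Mbps ~ 11.1 h
print(f"compute: {compute_hours:.1f} h")
# If upload time dwarfs compute time, the job is a poor cloud candidate.
```

In this made-up example the upload takes roughly five times longer than the computation itself, so bursting this particular job to the cloud would not make sense even before the bandwidth bill arrives.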
I personally believe the real usage pattern of the public cloud for large enterprises is to move fluctuating workloads into the public cloud (e.g. Christmas sales for an e-commerce site, or newly launched services) but retain most of the steady workload in-house. In fact, I think enterprises will move their applications in and out constantly as traffic patterns change.
It is more appropriate to do this classification at the component level rather than at the application level. Instead of asking whether the application as a whole is suitable, we should determine which components of the app should run in the public cloud and which should stay in the data center.
In other words, the application runs in a hybrid cloud mix that spans public and private clouds.
The ability to move your application "frictionlessly" across cloud boundaries and manage the scattered components in a holistic way is key. Once you have this freedom of movement, the price of any particular public cloud provider matters less, because you can easily switch to a cheaper provider at any time.
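A component-level placement decision like the one described above can be expressed as a simple rule set. The component names and the criteria encoded here are hypothetical, chosen only to mirror the good/bad candidate lists earlier in the article.

```python
# Hypothetical component-level placement: decide per component, not per app.
# Component names and decision criteria are made up for illustration.

def place(component: dict) -> str:
    """Return 'public' or 'private' using rules from the candidate lists."""
    if component["needs_special_hardware"] or component["data_residency"]:
        return "private"          # bad public cloud candidates stay in-house
    if component["load"] == "bursty":
        return "public"           # fluctuating workloads burst to the cloud
    return "private"              # steady workloads stay in the data center

app = [
    {"name": "web-frontend", "load": "bursty",
     "needs_special_hardware": False, "data_residency": False},
    {"name": "billing-db", "load": "steady",
     "needs_special_hardware": False, "data_residency": True},
    {"name": "render-farm", "load": "bursty",
     "needs_special_hardware": True, "data_residency": False},
]

for c in app:
    print(c["name"], "->", place(c))
```

Here only the bursty, unconstrained front end goes public, while the regulated database and the GPU-bound renderer stay private: the hybrid mix the article argues for.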
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)