Cloud Computing – Don’t Try This at Home!

I went to the Eduserv symposium last week, entitled ‘Virtualisation and the Cloud’. It was a well-attended, well-presented event with excellent speakers covering the current state of cloud computing. Many interesting points were made, but two ideas in particular struck me.


The first idea was that building a genuine cloud computing solution should be left to the big guys. Creating a cloud-type solution on a small scale is fine, as long as you don’t expect to reap the benefits that an industrial provider could give you. Chief among these benefits is the ability to handle ‘burstiness’, i.e. short-term peaks in your system’s computing cycles or storage requirements. To handle those peaks on a private cloud, you would have to invest large sums of money in capacity that sits idle most of the year, all to solve a short-term problem – the very thing the Cloud is meant to save you from. The rough sums below illustrate the gap.
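Here is a back-of-the-envelope sketch of that trade-off. Every figure in it is a hypothetical placeholder – substitute your own hardware and provider pricing before drawing any conclusions.

```python
# Comparing "provision for peak in-house" with "own the baseline,
# burst to a public provider". All figures are illustrative placeholders.

BASELINE_SERVERS = 10        # servers needed for normal load
PEAK_SERVERS = 50            # servers needed during short-term peaks
PEAK_HOURS_PER_YEAR = 200    # how long the peaks actually last

COST_PER_SERVER_YEAR = 3000.0   # owning and running one server for a year
COST_PER_SERVER_HOUR = 0.50     # renting an equivalent instance per hour

# Private cloud: you must buy enough capacity for the peak, all year round.
private_cost = PEAK_SERVERS * COST_PER_SERVER_YEAR

# Hybrid: own the baseline, rent the extra servers only during the peaks.
burst_servers = PEAK_SERVERS - BASELINE_SERVERS
hybrid_cost = (BASELINE_SERVERS * COST_PER_SERVER_YEAR
               + burst_servers * PEAK_HOURS_PER_YEAR * COST_PER_SERVER_HOUR)

print(f"Provision for peak in-house:   £{private_cost:,.0f}/year")
print(f"Own baseline, burst to cloud:  £{hybrid_cost:,.0f}/year")
```

With these made-up numbers, buying for the peak costs £150,000 a year against £34,000 for the hybrid approach – the idle 40 servers are what kills the private option.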


The second idea was that the number of people using large-scale cloud solutions primarily for storage is being vastly overtaken by the number using the Cloud for computing cycles. The high overhead of data transport, combined with fears about data security, has left Cloud users happy to have their computing and transaction processing handled by the Cloud, but much less happy to leave their data there. A quick calculation (below) shows why transport alone tilts the balance.
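To make the transport overhead concrete, here is a rough illustration. The dataset size and link speed are assumptions, not measurements, but the asymmetry they expose holds at almost any realistic scale.

```python
# Why moving data is the expensive part: shipping a repository's
# holdings versus shipping a single compute request.

TERABYTE = 1e12                 # bytes
dataset_size = 5 * TERABYTE     # a modest repository's holdings (assumed)
link_speed = 100e6 / 8          # a 100 Mbit/s uplink, in bytes per second

transfer_seconds = dataset_size / link_speed
transfer_days = transfer_seconds / 86_400
print(f"Shipping 5 TB over 100 Mbit/s: ~{transfer_days:.1f} days")

# A transaction-processing request, by contrast, is a few kilobytes
# each way -- effectively free to send to the Cloud and back.
request_size = 4e3              # bytes
print(f"One 4 KB request: ~{request_size / link_speed * 1000:.2f} ms of link time")
```

Roughly four and a half days to move the data, a fraction of a millisecond to move a request: computing cycles travel cheaply, data does not.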

So what does this mean for a data-centric sector like repository management? The answer might be a hybrid approach: keep a private data centre, but use a public Cloud provider to host new data as it grows past your storage limits. This buys you time to assess whether your own data centre needs expanding – and gives you proof for the person holding the purse strings that it does! A minimal sketch of that overflow pattern follows.
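The sketch below shows the shape of the idea, assuming a local volume mounted at a hypothetical path and a provider back-end left as a stub: new items stay local while there is room, and overflow to the Cloud once a capacity threshold is crossed.

```python
# Minimal sketch of the overflow pattern: store items locally until the
# volume is nearly full, then place new items with a public provider.
# LOCAL_ROOT and both back-ends are hypothetical stand-ins.

import shutil
from pathlib import Path

LOCAL_ROOT = Path("/srv/repository")   # hypothetical local mount point
CAPACITY_THRESHOLD = 0.90              # spill to the Cloud above 90% usage

def local_usage() -> float:
    """Fraction of the local volume currently in use."""
    usage = shutil.disk_usage(LOCAL_ROOT)
    return usage.used / usage.total

def write_to_local_store(item_id: str, data: bytes) -> None:
    """Stand-in for the local repository back-end."""
    (LOCAL_ROOT / item_id).write_bytes(data)

def write_to_cloud_store(item_id: str, data: bytes) -> None:
    """Stand-in for a public provider's API (an object-store PUT, say)."""
    raise NotImplementedError("wire this to your provider's SDK")

def store(item_id: str, data: bytes) -> str:
    """Keep new items locally while there is room, otherwise overflow."""
    if local_usage() < CAPACITY_THRESHOLD:
        write_to_local_store(item_id, data)
        return "local"
    write_to_cloud_store(item_id, data)
    return "cloud"
```

The useful by-product is the record itself: once items start coming back marked “cloud”, you have the evidence of real overflow to put in front of whoever holds the purse strings.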