The trend toward running most scientific computations on clusters or grids hides the need for improvements in computational design. Using high performance computing (HPC) resources typically means provisioning prodigious amounts of compute and memory (and, by extension, networking). Demand for storage inevitably grows with the ever-increasing resolution that most scientific tools generate, yet how efficiently the resulting data is represented and stored can often be significantly improved.
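As a minimal sketch of how much room there often is to store scientific output more efficiently: many simulation fields are sparse or smooth, so even generic compression shrinks them dramatically. The array shape, values, and compression settings below are hypothetical, purely for illustration, and not drawn from any specific tool mentioned above.

```python
import gzip

import numpy as np

# Hypothetical simulation output: a 64^3 grid that is mostly zero,
# with a small localised feature (common in sparse scientific data).
field = np.zeros((64, 64, 64), dtype=np.float64)
field[30:34, 30:34, 30:34] = 1.0

raw_bytes = field.tobytes()
compressed = gzip.compress(raw_bytes, compresslevel=6)

raw_size = len(raw_bytes)            # 64^3 * 8 bytes, about 2 MiB
compressed_size = len(compressed)    # orders of magnitude smaller here

print(f"raw: {raw_size} bytes, compressed: {compressed_size} bytes")
```

Real formats such as HDF5 or NetCDF apply the same idea transparently via chunked, compressed datasets; the point is that higher resolution need not translate directly into proportionally higher storage cost.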
Having an excess of compute and memory invites waste to the extent that computational analyses could otherwise be made more efficient. If HPC resources were scarce, scientists would be forced to make better use of the available runtime and memory.
Another way to think about this: the rapid elasticity of cloud computing is equivalent to an organisation instantly expanding its inventory whenever demand increases, rather than serving growing demand with the same resources through better curation. If this happened with physical inventory, what would the downside be?