From The Editor | September 9, 2010

Data Center Transformation Part 8: Storage Economics

By Hu Yoshida, Hitachi Data Systems

Data center transformation will not happen if we do not invest in the right technologies. In my previous posts, I have talked about some of these technologies — server virtualization, storage virtualization, dynamic provisioning — and the need to integrate them into a common pool of resources.

Unfortunately, the return on investment for these types of technologies is often hard to communicate to the financial decision makers. This is particularly true in the case of storage technologies. Since the beginning of this industry, the base measurement has been cost per capacity, and investments in storage resources have been made on that basis. Financial decision makers understand lower cost per capacity; they may not understand the benefits or rate of return on technologies like virtualization and dynamic provisioning.

Some operational costs, like provisioning and device migration, can be deferred by buying more capacity upfront rather than buying the dynamic provisioning and virtualization tools that help you provision and migrate data on the fly. However, deferring operational costs by buying more capacity only creates a larger problem down the road as data increases and more applications and processes get layered on top of this infrastructure. The tendency is to focus on the capital or hardware costs of storage because they are easy to identify, while operational costs are harder to quantify and even harder to justify in terms of investments.

Today, the operational cost associated with powering, provisioning, storing, managing, migrating, protecting, and ensuring 24/7 availability is accelerating at an alarming rate. Many analysts believe that operational costs make up more than 75% of the total cost of storage today. If data centers ignore operational costs and invest only on the basis of hardware cost, they are not addressing the real cost problem. The operational costs do not go away, and the operations people will have to make up for them with their blood, sweat, and tears. Projects are delayed, budgets are overrun, and the business is left wondering why its IT budget is growing faster than its revenue even though it is getting a great deal on the hardware.

Why storage economics is needed
Storage Economics is a set of tools and methodologies developed by Hitachi Data Systems to address these problems. It was started over eight years ago by David Merrill, who has since developed it into a practice: we train practitioners in our academy and certify them as Blue Belts and Black Belts. David is our Chief Economist, and he blogs regularly on this topic.

Storage Economics starts with a storage assessment to understand how the storage is being used. It then identifies the total cost of ownership (TCO), including both operational and capital costs. Once these costs are identified, we map solutions like virtualization, dynamic provisioning, tiering, and archiving against them, show how these technologies can reduce the TCO per TB per year, and project a quantifiable return on investment. Since virtualization with Hitachi's USP V/VM enhances the storage assets it virtualizes — with improved performance and new functions like dynamic provisioning — we can also identify a Return on Asset (ROA) across all the storage assets that are virtualized. The benefits of these technologies are quantified and communicated in terms that the financial people can understand and measure: ROA, net present value, rate of return, and so on.

This is essentially a business case, but as we all know, business cases are like statistics: you can prove anything with statistics, and you can prove anything with a business case. What gives proof to a business case is measurement.

Storage Economics therefore also involves a project plan, which starts with the business in terms of service level objectives and key performance indicators. The technology is then scheduled into the plan, along with a process plan based on ITIL to manage the change and a people plan that defines new roles and responsibilities, training requirements, skills augmentation, and so on. From this project plan we set checkpoints with quantified targets to measure our progress. These checkpoints and measurements are what give proof to the business case.

Hitachi can provide professional services to take a customer through the whole process, from the initial assessment through project planning to benchmarking and measurement. If a data center already has many of these skills in place, it may just be a pre-sales effort to identify the total costs, map our technologies against those costs, and quantify the rate of return in terms that the financial people understand. Our Storage Economics tool kit also includes a quick estimator tool that can calculate the economic returns based on best practices and cost assumptions that the user can plug in. If you would like to know more about Hitachi's Storage Economics, check our website, David Merrill's blog, or ask your Hitachi sales person.
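The kind of quick estimate such a tool produces can be sketched in a few lines of Python. This is purely an illustrative model with made-up cost figures, not Hitachi's actual estimator: it compares TCO per TB per year for a buy-more-capacity-upfront baseline against a virtualization-plus-dynamic-provisioning scenario, then discounts the projected savings into a net present value.

```python
# Illustrative quick estimator for storage TCO and return on investment.
# All cost figures below are hypothetical assumptions for the sake of example.

def tco_per_tb_per_year(capital_cost, annual_operational_cost, capacity_tb, years):
    """Total cost of ownership per TB per year over the planning horizon."""
    total_cost = capital_cost + annual_operational_cost * years
    return total_cost / (capacity_tb * years)

def npv(cash_flows, discount_rate):
    """Net present value of annual cash flows, with year 0 listed first."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

CAPACITY_TB = 200
YEARS = 3

# Baseline: extra capacity bought upfront, higher ongoing operational cost.
baseline = tco_per_tb_per_year(
    capital_cost=500_000, annual_operational_cost=400_000,
    capacity_tb=CAPACITY_TB, years=YEARS)

# Transformed: added tooling spend, but lower operational cost
# (a hedged scenario in the spirit of the 25% TCO-reduction claim).
transformed = tco_per_tb_per_year(
    capital_cost=600_000, annual_operational_cost=250_000,
    capacity_tb=CAPACITY_TB, years=YEARS)

annual_savings = (baseline - transformed) * CAPACITY_TB

# Year 0: incremental tooling investment; years 1-3: operational savings.
investment_npv = npv(
    [-100_000, annual_savings, annual_savings, annual_savings],
    discount_rate=0.08)

print(f"Baseline TCO/TB/yr:    ${baseline:,.0f}")
print(f"Transformed TCO/TB/yr: ${transformed:,.0f}")
print(f"NPV of the investment: ${investment_npv:,.0f}")
```

The point of even a toy model like this is that it forces the operational cost line into the conversation, so the comparison is no longer hardware price alone.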

Data center transformation will not happen if we don't make the right investments. Instead of going for the lowest acquisition cost, data centers must focus on the total cost of ownership, which is increasingly driven by operational costs. Our experience has shown that the combination of storage virtualization, dynamic provisioning, and other key features can typically reduce TCO by about 25% within a year. Often we hear decision makers say that they understand technologies like virtualization, but that it comes down to the need to reduce their storage spend. What they are really saying is that they have not done the economic study to analyze their total cost and expected rate of return. They haven't done their storage economics.