Network Attached Storage: A Market Overview

What is Network Attached Storage (NAS)?


The Old Server Paradigm
Over the past decade, the acceptance of networking as a viable medium for decentralized data and application processing in business has given rise to ever-increasing server responsibilities. As those responsibilities have escalated, so too have the cost and complexity of servers, along with the burden of managing them. The result is a danger of coming full circle, back to the confining centralized data and application approach of old, an approach diametrically opposed to today's distributed networks. Distributed networking liberated us from an era in which centralized data and applications ran on mainframes and minicomputers that were expensive to own and maintain. Advances in microprocessor and ASIC technology, coupled with innovative interface topologies and standards initiatives, enabled systems manufacturers to develop and deploy relatively inexpensive general-purpose servers with ever more horsepower. As a result, servers are capable of performing a myriad of tasks. In the face of this, a pervasive trend toward client-centric networking is ushering in a new server paradigm.

In today's network environment, some database and E-mail applications excepted, most users launch and run applications on their clients. As such, servers are quickly becoming massive repositories of shared information, shifting away from their traditional role of application processing. Herein lies one of the most significant challenges facing the next generation of server engines and the microprocessors powering them. Servers today are based on microprocessors designed for application processing, as evidenced by the presence of additional function-specific processors in server subsystems.

Servers, PC-based or otherwise, are routinely built using add-in boards with embedded RISC processor engines controlling functions such as Ethernet and/or SCSI interfaces. Even as these embedded engines find their way down onto the motherboard, the principal reason for including them is to offload the application processor from tasks it was never designed to perform, such as moving data. Until now, manufacturing process technology had not advanced sufficiently for a new paradigm to take hold, leaving only one alternative: expanding the old paradigm by giving the server more and more responsibility. By taking advantage of current technology, it is now possible to embed most server functions into what JES engineers have defined as a "solution set" (a combination of hardware and software coupled together to perform a particular function, or finite set of functions, within highly integrated semiconductors).

Issues Driving a New Server Paradigm
Servers are expensive. You thought you got a good deal with that "entry-level" server until you shelled out money for the missing hardware and software. With today's servers you learned the term "plug-and-play" really means "plug-and-pray."

Installing or upgrading traditional server hardware and/or software requires a computer guru that is not included. Maintaining servers requires a separate line item in the budget. When the server is down, nothing gets done. The solution costs 10 times more than the problem and then stays with you like a ball and chain.

Enter the New Server Paradigm
Just as the trend toward client-centric networking gave rise to the NetPC initiative (a low-cost, function-specific network PC proposal), so the trend toward lower-cost, function-specific servers has begun. The essence of the new paradigm is the notion of extracting solution sets from the general-purpose server and moving them to a more optimized environment. This allows the most advantageous hardware and software architectures to be selected for each service and then embedded in a chip, a chipset, or an extensible, small-footprint printed circuit board. The level of integration depends chiefly on available process technology in several areas of fabrication, described later.

Key to any intelligent embedded architecture is the processor core selected. For example, the NetPC thin-client reference platform calls for a Pentium processor. In fact, any applications microprocessor could have been selected; in this case, since Microsoft's products are to be used in conjunction with this particular solution set, the Pentium was an obvious choice. Under the new server paradigm, the solution set will be evaluated and may, or may not, use an applications microprocessor. For example, moving data, the principal function of a file server, is best handled by an input/output processor tightly coupled with a real-time operating system, not an application processor running a scaled-down version of a general-purpose operating system.

Network storage has now reached the point where it must be considered a key service of distributed computing. The accelerating growth of storage capacity is one reason for this new role, but other issues are also driving the change. Today, businesses need fast, reliable access to their data. They need to install and maintain storage easily, with minimum network downtime and minimum administrative overhead. And they often need to access their data from a variety of environments.

In the past few years, data storage has changed from an almost transparent issue in network design to a critical one. Storage today has a major impact on budgets, staffing, network response, availability and many other key issues. The result is that storage has become a network component that must be addressed directly as part of the planning process. Data-intensive environments and bloated applications are driving up network storage needs. These applications include imaging, CAD/CAM, graphics, databases, office suites, and a proliferation of data communication programs, such as e-mail. As corporations continue to invest in these storage-hungry applications, storage management becomes increasingly critical.

Rapid Growth
Worldwide network storage needs are expected to more than double from 5,203 TB in 1995 to 11,096 TB in 1996, and to continue growing rapidly to a projected 59,661 TB by 1999.
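
The growth rates implied by these projections can be checked with a quick calculation. The Python sketch below uses only the three figures quoted above; the year labels and rounding are the only additions.

    # Growth rates implied by the storage projections quoted above.
    capacity_tb = {1995: 5_203, 1996: 11_096, 1999: 59_661}   # worldwide network storage, in TB

    # 1995 to 1996: a single-year growth factor.
    factor_95_96 = capacity_tb[1996] / capacity_tb[1995]
    print(f"1995-1996 growth factor: {factor_95_96:.2f}x")     # roughly 2.1x, i.e. more than double

    # 1996 to 1999: compound annual growth over three years.
    years = 1999 - 1996
    cagr = (capacity_tb[1999] / capacity_tb[1996]) ** (1 / years) - 1
    print(f"1996-1999 compound annual growth: {cagr:.0%} per year")   # roughly 75% per year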

With this continuous, exponential growth, storage problems are inevitable. While hardware cost and administrative overhead are the most visible problems, others are beginning to emerge. These include growing network complexity, declining reliability, inadequate performance and the potential unmanageability of the entire system.

Separating Storage from The General Purpose Server
The traditional method of adding storage to the network is to add disks to the general-purpose server. Whenever additional space is needed, another hard drive is added until the server is full. At that point, another server must be added to handle increases in storage.

Problems with the General Purpose Server Storage Model
The problems with this method of adding storage are many. Adding multiple drives to the general-purpose server is costly, not only because of the initial investment, but also because of ongoing maintenance and administrative overhead. As more equipment is added, the performance level of the general-purpose server drops due to the increased processing overhead associated with disk I/O.

The general-purpose server can easily become packed with a wide array of server functions. Although there is some benefit because of reduced hardware requirements, the overall results are negative. Each service in a multi-function general-purpose server shares the single CPU and bus. If one service is heavily used, it has the potential to virtually halt all other services on the shared platform. For example, a backup service is easily capable of monopolizing all server resources.

Moreover, reliability is threatened when a general-purpose server is used for multiple functions. The mean time between failures (MTBF) of the general-purpose server shortens as more processes and hardware components are added; in other words, the frequency of failures increases roughly in proportion to the number of components. Data access is another issue. Today, most businesses run at least two major operating systems, and when multiple network protocols are involved, some data may not be accessible to all workgroups.
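
The reliability point can be made concrete with the standard series-reliability approximation: when every component is required and failures are independent, component failure rates add, so the system MTBF shrinks as parts are added. The sketch below is illustrative only; the component MTBF figures are hypothetical, not measured values.

    # Series-reliability sketch: failure rates of independent, required components add,
    # so system MTBF drops as hardware is added. All MTBF figures here are hypothetical.

    def system_mtbf(component_mtbfs_hours):
        """Combined MTBF for independent components that must all work (series model)."""
        total_failure_rate = sum(1.0 / m for m in component_mtbfs_hours)
        return 1.0 / total_failure_rate

    base_server = [300_000, 250_000, 400_000]        # e.g. system board, power supply, network interface
    loaded_server = base_server + [200_000] * 4      # the same server with four additional disk drives

    print(f"Base server MTBF:   {system_mtbf(base_server):,.0f} hours")
    print(f"Loaded server MTBF: {system_mtbf(loaded_server):,.0f} hours")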

Because of these performance and reliability limitations, a multi-function general-purpose server is appropriate only for small networks. As the network grows, the multi-function server model cannot scale to meet a large system's demands for stable throughput, reliability and easy manageability.

Dedicated File Server Model
Managers have begun to address these problems by dedicating a general-purpose server for file service. The benefits of this approach are that performance and reliability of the general-purpose server are improved. At the same time, running this service on a dedicated machine enhances the performance and reliability of file service itself.

This approach, however, is still lacking in several areas. Although performance and reliability are improved, hardware cost, management and administrative overhead of the file server remain problems. Additionally, the dedicated file server continues to be a protocol-dependent system that restricts accessibility from other network protocols throughout the enterprise.

Direct Network Attach Storage Platform
As we have seen, neither the general purpose server-based storage model nor the dedicated file server model is capable of scaling to meet the escalating demands for data storage. Clearly, data storage needs to be separated from the traditional server platform, and moved onto a platform that has been designed solely for network data storage.

[Figure: flowchart]

A true data storage platform for a network has specific requirements. The first requirement is that the storage platform be scalable. Scalability means that as the size of the storage platform grows, management, administration and hardware costs do not grow proportionally, but are kept within reasonable limits.

Accessibility is another requirement of network storage. As the number of operating systems and users inevitably grows, data access must be assured. Network storage should allow access to users throughout the enterprise regardless of the operating system they are using or the file server to which they are attached. For example, users of UNIX, NetWare and NT should have access to the same data storage system.

Simplicity is another key requirement. General-purpose servers are inherently designed to do multiple services. The complexity of these systems diminishes their reliability and at the same time raises the cost. A data storage platform should be designed for file serving only in order to decrease the complexity, improve the reliability and lower the cost.

Cost is obviously an important issue in a data storage platform, but it is more than simply the purchase price of the unit. Cost ultimately determines the flexibility of the storage solution. An expensive storage platform typically must be centralized so that its cost can be spread over a large population, yet in distributed computing, data is often best stored in the local environment to improve access times and manageability. Expensive data storage platforms can also limit storage strategies; sometimes data needs to be stored in multiple locations, for example when managers want a separate storage system for disaster recovery or for archival purposes. With a large, expensive data storage platform, these options simply are not practical.

Building A Cost Model
Analyzing the cost of a network storage system involves looking at several issues. The actual hardware cost is only a small percentage of the overall cost of the storage solution. In addition to hardware cost, total costs include installation, configuration, and ongoing maintenance of the system.

Network impact is another critical consideration in the cost of network storage. Network downtime and weekend or off-hours maintenance can add seriously to the total cost-of-ownership of a storage solution. Whenever the network is down, productivity is lost; and because workers cannot generate business while the network is down, revenue is lost as well. Another component of overall storage cost is the sensitivity of the storage solution to changes in the network. If the storage system requires upgrading or reconfiguration with each change in the network operating system, network hardware components, or application software, the real cost-of-ownership increases.
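
One simple way to organize these factors is as a total-cost-of-ownership sum over the life of the storage solution. The sketch below is a minimal model, not a pricing tool; the cost categories follow the discussion above, and every dollar figure in the example call is a placeholder the planner would replace with local estimates.

    # Minimal total-cost-of-ownership model for a network storage addition.
    # Cost categories follow the discussion above; all inputs are the planner's own estimates.

    def storage_tco(hardware, install_hours, labor_rate,
                    downtime_hours, downtime_cost_per_hour,
                    annual_admin_hours, years):
        """Hardware + installation labor + network downtime + ongoing administration."""
        installation = install_hours * labor_rate
        downtime = downtime_hours * downtime_cost_per_hour
        administration = annual_admin_hours * labor_rate * years
        return hardware + installation + downtime + administration

    # Example with placeholder figures: a disk added to an existing server.
    total = storage_tco(hardware=2_000, install_hours=2, labor_rate=50,
                        downtime_hours=1, downtime_cost_per_hour=3_500,
                        annual_admin_hours=6, years=3)
    print(f"Estimated three-year cost of ownership: ${total:,.0f}")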

Adding Network Storage
Network managers have three options for adding on-line, hard disk storage to a network:

  1. Add storage to an existing general purpose server
  2. Add a new general purpose server
  3. Implement a direct network attach (DNA) storage solution

Adding Storage to Existing General Purpose Servers
If an available general-purpose server has sufficient unused capacity, a new hard disk drive can be added to that server. This option introduces the following costs:
  1. Hard disk cost
  2. Installation and configuration cost
  3. Network downtime

The cost of a 9 GB hard disk ranges from $1,500 to $3,000, depending on the system requirements and the supplier (sample prices only; prices vary over time). For simplicity, all of the following examples use a 9 GB hard disk as the basis for comparison. The time to install and configure a hard disk in an existing server is one to three hours. A typical cost for this level of technical service is $50 per hour, whether the service is provided by in-house personnel or by an outside source.

While the hard disk is being installed in the server, the network may have to be shut down. The cost of network downtime varies widely depending on the size and activity of the network; the average cost for Fortune 1000 companies is estimated at $3,000 to $4,000 per hour (source: Frost & Sullivan). The cost of downtime is so high that many network managers use weekends or off-hours for work that requires shutting down the network. For an increasing number of networks, however, this option does not exist because the network is in use around the clock.
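
Using the sample figures above, the downtime component alone can exceed the price of the hardware. The sketch below simply multiplies the installation window by the estimated hourly cost of downtime; it assumes the network really is unavailable for the full one-to-three-hour window.

    # Downtime cost for the disk-installation window, using the sample figures above.
    install_hours = (1, 3)                    # one to three hours to install and configure the disk
    downtime_cost_per_hour = (3_000, 4_000)   # Fortune 1000 estimate (Frost & Sullivan)
    disk_cost = (1_500, 3_000)                # sample 9 GB hard disk price range

    downtime_low = install_hours[0] * downtime_cost_per_hour[0]
    downtime_high = install_hours[1] * downtime_cost_per_hour[1]
    print(f"Downtime cost if the network must be shut down: ${downtime_low:,} to ${downtime_high:,}")
    print(f"Hard disk cost:                                 ${disk_cost[0]:,} to ${disk_cost[1]:,}")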

Adding a New General Purpose Server
A network manager may decide to install new disk storage by adding a new general purpose server. This would be the case when no other server is available to receive a new disk drive. Or, a manager may decide to dedicate a general-purpose server to file service to increase network performance and reliability.

[Figure: cost sheet]

The cost of a general-purpose server with 9 GB of storage typically ranges from $10,000 to $15,000. The time to install and configure the server varies widely, from as little as one hour to as much as two days, even when the work is performed by a skilled network technician. In addition to the new hardware and software costs, there are costs associated with reconfiguring the system, connecting SCSI ports, and installing drivers. The network is often down during installation, and the cost of that downtime involves the same issues discussed above under Adding Storage to Existing General Purpose Servers. In the case of Microsoft Windows NT and Novell NetWare, a NOS license must be purchased for each server, which can greatly add to the cost of a dedicated network file server.

Sensitivity to Network Changes
The ongoing administration costs should also be considered in the cost of storage systems. When a new version of the network operating system is installed, in most cases all general-purpose servers must be upgraded to the new version. This task must usually be performed on each server individually and typically requires one to three hours per server.
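
Because the work is repeated on every server, the upgrade burden scales with the size of the server population. The sketch below uses the one-to-three-hour figure above and the $50-per-hour service rate cited earlier; the server count and the number of upgrades per year are hypothetical.

    # Rough annual labor cost of NOS upgrades across a server population.
    # Hours per server and labor rate follow the text; server count and upgrade
    # frequency are hypothetical placeholders.
    servers = 12                      # hypothetical number of general-purpose servers
    hours_per_server = (1, 3)         # one to three hours per server, per upgrade
    labor_rate = 50                   # dollars per hour of technical service
    upgrades_per_year = 2             # hypothetical upgrade frequency

    low = servers * hours_per_server[0] * labor_rate * upgrades_per_year
    high = servers * hours_per_server[1] * labor_rate * upgrades_per_year
    print(f"Annual NOS upgrade labor: ${low:,} to ${high:,}")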

Comparing the Alternatives
The above calculations describe the typical costs of adding network hard disk storage. Adding a 9 GB hard disk drive to an existing general-purpose server costs anywhere from $1,050 to $11,050. When storage is added in conjunction with a new general-purpose server, the cost jumps dramatically, to between $14,050 and $22,050. Each additional disk (using the 9 GB example) can be purchased at the going price of off-the-shelf data storage; in these examples, a 9 GB hard disk drive sells for approximately $1,500.