Secure SAN Storage Sharing With Volume Mapping
Storage and data sharing, long the domain of proprietary mainframe (MVS) and mid-range operating systems such as OpenVMS, has become a goal of open systems Unix, Windows NT, and Linux environments. With more systems being deployed, and applications requiring more storage, the management of these systems and the proliferation of their vendor-specific storage have resulted in:
- Dedicated or isolated islands of disk storage
- Under-utilized storage capacity on various systems
- Mixed vendors and a lack of management tools
- Poor investment protection and upgradability
- Poor storage acquisition and allocation
- Replication or duplication of data and storage
- Reduced backup windows with more data to back up
- Disaster recovery and security issues
Figure-1 (Traditional storage environment with dedicated, isolated storage)
Storage Area Networks
SANs (Figure-2) are a new storage paradigm that combines the flexibility and attributes of networking with the performance and robustness of traditional I/O channels. SANs combined with Fibre Channel technology hold the promise of addressing the issues above, as well as others, including improved security and data integrity.
Figure-2 (Simple Storage Area Network)
Fibre Channel enables multiple host systems to access and share storage using the SCSI-3 protocol (SCSI_FCP) mapped onto Fibre Channel (100MB/sec, distances up to 10km, and support for 126 nodes plus a fabric port per loop). Fibre Channel provides multiple topologies that can be combined to create a storage network or SAN. These include point-to-point, where a host system attaches directly to a storage device much as with parallel SCSI; private loop, where one or more host systems attach to one or more storage devices using a Hub; public loop, where a loop of hosts and storage devices on a Hub is in turn connected to a fabric port on a switch; and switched environments using Fibre Channel switches similar to network switches. The good news with Fibre Channel and SANs is that any system can potentially get to, see, and access any storage device. The bad news is that any system can potentially get to, see, and access any storage device in a SAN.
Shared Storage
Many of today's SCSI and UltraSCSI RAID arrays have the ability to be shared between two or more host systems at the same time over the same or multiple I/O busses (parallel SCSI or Fibre Channel). Using port mapping or LUN mapping techniques as shown in Figure-3, LUNs or storage volumes are mapped from a storage device such as a RAID array to a specific port or I/O bus. For host systems with multiple I/O ports and busses, placing specific hosts on specific or unique busses, as with parallel SCSI, can ensure a certain level of uniqueness. As seen in Figure-3, System-A sees LUN 0 and LUN 1; System-B also sees LUN 0 and LUN 1 on one of its I/O bus interfaces, with LUN 2 and LUN 3 seen on its other bus interface; System-C and System-D also see LUN 2 and LUN 3. In this example, if System-A were using LUN 0 and System-B were a Windows NT system, System-B would try to claim LUN 0 and could overwrite System-A's data, risking data integrity.
Figure-3 (Port Mapping with parallel SCSI)
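The exposure described above can be sketched in a few lines of Python. This is purely an illustrative model of the Figure-3 layout, not a real storage API; the bus and host names are invented:

```python
# Illustrative model of the port mapping in Figure-3: LUNs are tied to
# I/O busses, so every host attached to a bus sees every LUN mapped to
# that bus. All names here are made up for the example.

# LUNs mapped per I/O bus (as in Figure-3)
PORT_MAP = {
    "bus-1": {0, 1},   # shared by System-A and System-B
    "bus-2": {2, 3},   # System-B's 2nd interface, System-C, System-D
}

# Which busses each host system is attached to
HOST_BUSSES = {
    "System-A": ["bus-1"],
    "System-B": ["bus-1", "bus-2"],
    "System-C": ["bus-2"],
    "System-D": ["bus-2"],
}

def visible_luns(host):
    """A host sees every LUN on every bus it is attached to."""
    luns = set()
    for bus in HOST_BUSSES[host]:
        luns |= PORT_MAP[bus]
    return luns

# System-B sees LUN 0 and LUN 1 as well -- nothing in port mapping
# stops it from claiming LUN 0 even while System-A is using it.
print(sorted(visible_luns("System-A")))  # [0, 1]
print(sorted(visible_luns("System-B")))  # [0, 1, 2, 3]
```

The point of the sketch is that port mapping controls only which bus a LUN appears on, not which host on that bus may use it.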
In Figure-4, Fibre Channel is used as the I/O interface instead of parallel SCSI. As more host systems are added to a loop or SAN, the issue of shared access becomes more complex, so a more robust solution than simple port mapping is needed for Fibre Channel SAN environments. Current-generation switches have various zoning capabilities for data integrity purposes. A new generation of storage devices provides volume mapping or zoning directly from the device (Figure-5) without the need for a switch or special host-based or adapter-based software.
Figure-4 (Port Mapping with Fibre Channel)
When multiple systems are consolidated onto a shared storage interface as in a SAN, there is no longer a point-to-point connection, resulting in the need for improved methods to control access to shared storage resources. In the future, as Fabric switches mature and improve in interoperability and price, many of these issues will fade away. In the meantime, an alternative mechanism is needed to facilitate storage sharing between homogeneous (same) and heterogeneous (different) hosts.
A technique for isolating storage is to create virtual zones where specific storage devices (LUNs) are mapped to specific host systems. By mapping storage devices or volumes to a specific host system, a virtual storage network is created, similar to a network VLAN where a virtual connection is created between hosts using a network switch. Volume mapping or zoning enables heterogeneous (different) hosts to exist on the same public interface. In a SAN environment, a virtual storage interface is created and seen by the host and the storage device as a one-to-one or point-to-point connection. Volume mapping or zoning to date has only been available in Fibre Channel switch or fabric implementations, which are still maturing in terms of functionality and interoperability.
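The "virtual point-to-point" effect can be illustrated with a small sketch: once volumes are mapped per host, each host's view of the shared loop collapses to just its own storage, as if it were directly cabled to the array. The host, array, and LUN names below are invented for the example:

```python
# Illustrative sketch of per-host volume mapping. Each entry maps one
# host system to one volume (LUN) on an array; names are made up.
MAPPINGS = [
    ("System-A", "array-1", 0),
    ("System-B", "array-1", 1),
    ("System-C", "array-1", 2),
]

def host_view(host):
    """Only volumes mapped to this host appear; all others are hidden,
    so the shared loop looks like a point-to-point connection."""
    return [(arr, lun) for (h, arr, lun) in MAPPINGS if h == host]

print(host_view("System-A"))  # [('array-1', 0)]
print(host_view("System-B"))  # [('array-1', 1)]
```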
Storage Zones and Pools
Zoning or volume mapping at the storage device or RAID array level provides the ability to map specific LUNs to specific host systems on the same I/O bus (hub, public or private loop, or switch/fabric). Storage-system-based volume mapping enables specific storage groups or pools to be created and managed by a RAID array in an open, host-independent manner with no special host software required. This capability enables SANs to move from the drawing board or enhanced SCSI bus to a shared storage environment without waiting for switch decisions to be made. One way of thinking about volume mapping is as the ability to set up virtual firewalls between storage devices and host systems on the same RAID array in a SAN environment.
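The "virtual firewall" idea amounts to the array consulting an access table before honoring a request. A minimal sketch, with invented names and structure (real arrays implement this in firmware with their own interfaces):

```python
# Hypothetical access table kept by the RAID array: which host owns
# which LUN. The array rejects I/O from any other host on the bus.
VOLUME_MAP = {
    0: "System-A",   # LUN 0 is private to System-A
    1: "System-B",
    2: "System-C",
}

def array_allows(host, lun):
    """The array honors requests only from the host mapped to the LUN."""
    return VOLUME_MAP.get(lun) == host

# System-B can still *see* the loop, but the array denies it LUN 0,
# so an NT host cannot claim and overwrite another system's volume.
print(array_allows("System-A", 0))  # True
print(array_allows("System-B", 0))  # False
```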
Some benefits of Volume Mapping or Zoning include:
- Support heterogeneous platforms (e.g. Windows NT, Sun, HP)
- Shared resources by placing tape in one zone, and storage in others
- Separate departments, workgroups, functions
- Reduced backup times and improved LAN performance
- Improved disaster recovery capability
- Improved backup performance and improved availability
- Security of applications and data for testing and storage pools
- Position for migration to switch or fabric technology when you are ready
- Reduce total cost of ownership and improve acquisition for storage pools
Zoning or volume mapping can be accomplished in different ways. A quick and easy method of mapping data to specific host systems is port mapping, where LUNs or storage devices are mapped to specific ports or I/O channels (FC loop or port). For simple environments where only a single system, or a few systems of the same type, exist on the same port or I/O channel (Figures 2 and 3), as in a public or private loop, this might be sufficient.
For mixed host environments where two or more different operating systems exist on the same I/O channel, simple port mapping can result in data integrity issues. For example, Windows NT 4.x is known to look for new devices on its interfaces and will attempt to claim them. If another system (NT or Unix) on the same I/O channel is already accessing the storage device, data integrity can be compromised: the NT system, thinking the device is unclaimed or unused, may access it and overwrite existing data. A better approach is needed to isolate systems to their own private loop, port, or zone. Although a zone can be thought of as a concept or virtual connection, zones and volume mapping are implemented in software or hardware.
Switch or Fabric based zoning
Fabric or switch based zoning can be implemented using hard addressing with World Wide Port Names (wwpn), or some other form of addressing such as port mapping. Every Fibre Channel device, including host bus adapters and storage devices, has a unique hard-coded address called a World Wide Port Name, similar to an Ethernet MAC address found in networks. The wwpn is a vendor-unique address and can be used to create hard mappings or addressing between Fibre Channel devices. Port mapping, unlike hard addressing using wwpn, maps certain storage devices to various hosts via ports, as seen in Figure-3. Port mapping is best used where one host system is attached to a specific switch port and one storage device is attached to another switch port. This technique can be used to create a point-to-point or one-to-one mapping between a host and a storage device as long as no other devices or hosts are on the same interface. Another approach is to use wwpn, or a combination of name and directory services within the switch and fabric, to create mappings and establish virtual storage networks or interfaces between specific volumes or LUNs at the adapter level.
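The two switch zoning styles above can be contrasted with a short sketch. Zone names, WWPN values, and port numbers below are all invented; the point is only the membership test each style implies:

```python
# wwpn-based zoning: membership follows the device's World Wide Port
# Name, so a device stays in its zone regardless of which switch port
# it is cabled to. WWPNs here are fabricated examples.
WWPN_ZONES = {
    "zone-nt": {"10:00:00:00:c9:aa:aa:aa",    # an NT host adapter
                "50:06:04:81:bb:bb:bb:bb"},   # its storage port
}

# Port-based zoning: membership follows the physical switch port, so
# moving a cable changes what a device can reach.
PORT_ZONES = {
    "zone-nt": {1, 5},
}

def same_zone_wwpn(a, b):
    """True if two WWPNs share at least one zone."""
    return any(a in z and b in z for z in WWPN_ZONES.values())

def same_zone_port(a, b):
    """True if two switch ports share at least one zone."""
    return any(a in z and b in z for z in PORT_ZONES.values())

print(same_zone_wwpn("10:00:00:00:c9:aa:aa:aa",
                     "50:06:04:81:bb:bb:bb:bb"))  # True
print(same_zone_port(1, 2))                       # False
```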
Hardware Zoning and Volume Mapping
Hardware based zoning or volume mapping can be implemented in different ways, with the safest and most open approach utilizing standard wwpn addressing as defined in the Fibre Channel standard. Proprietary implementations can use various schemes that may be tied to specific hardware platforms or host systems; in contrast, implementing techniques where host systems register with storage devices to gain access ensures vendor and platform interoperability and co-existence. Figure-5 shows an example using storage or device based "hard" volume mapping, where a host system registers with a storage device for access using wwpn addressing. In Figure-5, System-A has been mapped to LUN 0, and no other system in the SAN will be able to access LUN 0 on that specific storage device. This method prevents systems from trying to fool each other, or from becoming out of synchronization with each other's configurations and compromising data integrity. This type of volume mapping is dynamic and can be changed quickly using network-based interfaces to meet storage needs.
Figure-5 (Hardware or Storage Device based Volume Mapping)
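The registration scheme of Figure-5 can be sketched as follows. The class, method names, and WWPN strings are hypothetical, invented for illustration; a real array enforces this in firmware:

```python
# Hypothetical model of device-based "hard" volume mapping: a host
# presents its wwpn to the array, and the array grants I/O on a LUN
# only to the wwpn registered for it. Names are made up.

class RaidArray:
    def __init__(self):
        self.lun_owner = {}   # LUN number -> registered wwpn

    def register(self, wwpn, lun):
        """First registrant owns the LUN; later claims are refused."""
        if lun in self.lun_owner and self.lun_owner[lun] != wwpn:
            return False
        self.lun_owner[lun] = wwpn
        return True

    def io_allowed(self, wwpn, lun):
        """I/O is honored only from the LUN's registered owner."""
        return self.lun_owner.get(lun) == wwpn

array = RaidArray()
array.register("wwpn-system-a", 0)            # System-A claims LUN 0
print(array.io_allowed("wwpn-system-a", 0))   # True
print(array.io_allowed("wwpn-system-b", 0))   # False: locked out
print(array.register("wwpn-system-b", 0))     # False: already owned
```

Because the check keys on the wwpn rather than a port or bus position, the mapping survives recabling and works across mixed host platforms.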
Hardware based zoning or volume mapping, where a host registers with storage devices and access is based upon wwpn, provides added security and data integrity. Because it prevents unauthorized systems from accessing storage in a safe, non-proprietary manner without requiring host software or special host adapters, hardware or hard zoning is a preferred method of zoning or volume mapping.
Software or host-based zoning
Software based host bus adapter zoning uses host software to coordinate access with specific storage devices or adapters and restrict access. One issue with software based solutions is security: to be effective, each and every host in the SAN must cooperate by running the software and keeping the others updated on changes in configuration and access permissions. Should one of the systems be compromised, the entire SAN could be compromised. This approach also requires special software to be loaded onto all systems, and that software may not be available for some systems, resulting in a closed or non-open solution.
Summary
SANs hold the promise of delivering shared storage in an open, non-proprietary environment. Shared storage in a SAN environment requires some method to ensure data integrity and prevent unwanted access from non-authorized systems. Some operating systems, such as Windows NT 4.x, attempt to access newly found storage devices and allocate them for their own use even if the storage is being used by another system. To protect storage and data from unwanted access, zoning or volume mapping ensures that only the system or systems authorized to access a device can do so. Systems not authorized for access to specific devices will be denied access to the LUN, maintaining data integrity.
As SANs evolve and interoperability issues are resolved, fabrics and switches should provide zoning and volume mapping capabilities. In the meantime, zoning or volume mapping at the storage device or RAID array level enables shared storage pools to be created while maintaining data integrity amongst heterogeneous and like systems. Using hardware or storage device based zoning or volume mapping enables secure SANs to be built today without being forced to make a switch or fabric decision until you are ready to do so.
Greg P. Schulz