What should you do the next time you run out of online disk storage? Buy another array? Maybe. Upgrade to higher capacity drives? Perhaps. But what if "next time" seems destined to come around every year - or every few months? A "buy more" approach can become prohibitively expensive if applied each time data proliferation threatens to overwhelm available resources.
At Daytona Beach Community College (DBCC) in Daytona Beach, FL, telltale signs inside the data center hinted at a potentially costly struggle to handle expanding storage demands. Piled in the corner were discarded disk drives, victims of the ever-increasing need for higher capacity per disk. "Some of our servers had huge disk arrays attached to them - as many as fifteen 2 GB drives," explains Mike Burke, director of IS for DBCC. "When we ran out of space on the 2 GB drives, we replaced them with 4 GB drives, which were the largest available at that time. By the time we filled the 4 GB drives, 9 GB drives were available, so we upgraded again."
While DBCC was constantly adding capacity by replacing drives, it was also, ironically, leaving existing disk space underutilized. That's because the institution relied exclusively on server-attached storage in a multiplatform environment. "An application running on a NetWare server would suddenly require additional disk space; however, the only available space would be attached to Windows NT servers," Burke says. "Because the storage was direct attached, it couldn't be moved around. If we needed a new volume for a server on a particular platform, we had to shut the server down and replace the drives."
As DBCC approached the 2 TB threshold, it knew it had to remove the barriers separating available disk space from storage-hungry applications. So, DBCC rolled out a SAN (storage area network). To ensure optimal utilization across the SAN, DBCC implemented SANsymphony, a storage management software package from DataCore Software Corp. (Fort Lauderdale, FL).
Virtualization Drives Utilization
In DBCC's SAN, application servers on the WAN (wide area network) are connected to the Fibre Channel SAN via switches from Brocade (www.brocade.com). Also on the SAN are the storage domain servers running SANsymphony. They provide a virtualization layer for managing the SAN's storage devices, which include disk controllers, disk arrays, and a tape library, all from StorageTek (www.storagetek.com). Because SANsymphony manages storage at the network administration layer rather than the device layer, space on any disk can be made available for any application, regardless of platform.
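The payoff of managing storage above the device layer can be sketched in a few lines of code. The following is a simplified illustration of the idea, not DataCore's implementation: all physical disks join one pool, and a logical volume for any server can be carved from free space on any disk, spanning disks if necessary. Class and method names here are hypothetical.

```python
# Simplified sketch of network-layer storage virtualization (illustrative
# only; not SANsymphony's actual design). Physical disks form one shared
# pool, and logical volumes draw extents from any disk with free space.

class PhysicalDisk:
    def __init__(self, disk_id, capacity_gb):
        self.disk_id = disk_id
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    @property
    def free_gb(self):
        return self.capacity_gb - self.used_gb


class StoragePool:
    """Pools every disk on the SAN, regardless of which server sees it."""

    def __init__(self, disks):
        self.disks = list(disks)
        self.volumes = {}  # volume name -> list of (disk_id, gb) extents

    def create_volume(self, name, size_gb):
        # Gather extents from whichever disks have free space; the
        # requesting server's platform no longer matters.
        extents, needed = [], size_gb
        for disk in self.disks:
            if needed == 0:
                break
            take = min(disk.free_gb, needed)
            if take > 0:
                disk.used_gb += take
                extents.append((disk.disk_id, take))
                needed -= take
        if needed > 0:
            # Roll back the partial allocation: pool is exhausted.
            for disk_id, take in extents:
                next(d for d in self.disks if d.disk_id == disk_id).used_gb -= take
            raise RuntimeError("not enough free space in the pool")
        self.volumes[name] = extents
        return extents
```

With direct-attached storage, a 10 GB request fails if no single server has 10 GB free; in the pooled model it simply spans disks:

```python
pool = StoragePool([PhysicalDisk("d1", 9), PhysicalDisk("d2", 4)])
extents = pool.create_volume("netware_vol", 10)  # spans both disks
```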
Now, when DBCC needs to add disks, it can do so without causing server downtime. "We just plug in the disks and tell SANsymphony to configure the arrays," Burke says. "The server picks up the new disk volume without having to reboot." The tools also enable DBCC to quickly recover from server crashes. Says Burke, "Because the disk drives are attached to the SAN, not directly to the servers, the volumes follow the Fibre Channel devices. If a server fails, we can remove the Fibre Channel card from that server, install it in another server that has an available slot, and mount the volume."
To accommodate anticipated growth from 2 TB to 4 TB of storage, DBCC will be turning on advanced management tools that came with a SANsymphony upgrade. Currently, DBCC has to allocate particular storage volumes to particular servers based on predicted demand. The new tools will allow DBCC to set lower initial allocations and, therefore, optimally use existing disk space. Constantly monitoring utilization thresholds, SANsymphony will automatically reallocate storage as application demands fluctuate. In the event that capacity across the entire storage pool is nearly depleted, the system will alert administrators to add disks.
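The allocation behavior described above - small initial allocations that grow on demand, with an alert when the shared pool itself runs low - can be modeled with a short sketch. The threshold values and names below are hypothetical assumptions for illustration, not SANsymphony settings.

```python
# Hedged model of threshold-driven allocation (hypothetical parameters,
# not SANsymphony's API): volumes start small, grow from the shared pool
# when utilization crosses a threshold, and administrators are alerted
# when the pool itself is nearly depleted.

GROW_THRESHOLD = 0.80   # grow a volume once it is 80% full (assumed)
POOL_ALERT = 0.90       # alert once 90% of the pool is allocated (assumed)

class ThinPool:
    def __init__(self, pool_gb):
        self.pool_gb = pool_gb
        self.allocated_gb = 0
        self.volumes = {}   # name -> {"allocated": gb, "used": gb}
        self.alerts = []

    def create_volume(self, name, initial_gb):
        # Low initial allocation instead of a prediction-sized one.
        self.volumes[name] = {"allocated": initial_gb, "used": 0}
        self.allocated_gb += initial_gb

    def write(self, name, gb):
        self.volumes[name]["used"] += gb
        self._rebalance()

    def _rebalance(self):
        for name, vol in self.volumes.items():
            while vol["used"] / vol["allocated"] >= GROW_THRESHOLD:
                grow = max(1, vol["allocated"] // 2)  # grow by 50%
                if self.allocated_gb + grow > self.pool_gb:
                    self.alerts.append(f"cannot grow {name}: add disks")
                    break
                vol["allocated"] += grow
                self.allocated_gb += grow
        if self.allocated_gb / self.pool_gb >= POOL_ALERT:
            self.alerts.append("pool nearly depleted: add disks")
```

In a roomy pool a busy volume quietly grows and no one is paged; only when total allocation nears the pool's capacity does the alert fire, which is the administrator's cue to plug in more disks.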