The amount of data you need to handle is only going to increase. Plan for it and act accordingly.
Size does matter in this case. Every enterprise needs to
make judicious use of its storage capacity, and should know how much
more it will need to store all the data being generated or used in
the organization. That's where capacity planning pitches in. Capacity planning
is about calculating in advance (and making an educated guess about) how much
storage your enterprise will need. Remember, it's not just business files
sitting in storage: there are also plenty of non-essential user files (photos,
MP3s, downloads), e-mail, IM records and so on.
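As a back-of-the-envelope illustration, here is a minimal Python sketch of making that educated guess, assuming a simple compound-growth model. All the figures in it are hypothetical.

    def project_storage_tb(current_tb, annual_growth, years):
        # Project required capacity, compounding growth annually
        return current_tb * (1 + annual_growth) ** years

    current = 40.0    # TB in use today (hypothetical)
    growth = 0.35     # 35 percent yearly growth (hypothetical estimate)
    headroom = 1.25   # 25 percent safety margin on top of the projection

    for year in range(1, 4):
        needed = project_storage_tb(current, growth, year) * headroom
        print("Year %d: plan for about %.1f TB" % (year, needed))

The raw projection is the calculation; the headroom on top of it is the educated guess.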
Options too many
If I had a nickel for every storage management software product, I could be
filthy rich. That's precisely how many such products are available out there
in the market. But more than this, what's important is knowing what you need
and ignoring the rest. Another big pain point for users is the need for better
management tools, and an underlying issue is allocated versus utilized
capacity. Users need to better determine what is being utilized versus
what has been allocated, so that they can re-allocate the unused capacity.
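A minimal Python sketch of that allocated-versus-utilized check could look like the following; the volume names and figures are purely hypothetical.

    volumes = {
        # name: (allocated GB, utilized GB)
        "erp_data":   (2000, 1700),
        "mail_store": (1500,  600),
        "file_share": (3000,  900),
    }

    for name, (allocated, utilized) in volumes.items():
        unused = allocated - utilized
        pct = 100.0 * utilized / allocated
        print("%s: %.0f%% utilized, %d GB re-allocatable" % (name, pct, unused))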
While the storage industry continues to debate the issue of
where advanced storage services should reside, vendors of fabric switches and
virtualization software are standardizing application interfaces for
network-based storage applications.
A step in this regard is the FAIS (Fabric Application
Interface Standard)--a multi-vendor initiative that is defining new and more
intimate relationships between the network infrastructure and storage
applications. It represents a major step in the development of SAN technology as
it evolves from simple connectivity to intelligence. FAIS facilitates the
migration of storage virtualization and other enhanced services from edge
devices such as hosts and storage arrays to the network core.
Combining storage and networking into a SAN creates new
capabilities that are far more diverse than the sum of its parts. Traditional
network management, for example, focuses on data transport between source and
destination. Address assignment, device configuration, bandwidth allocation,
routing protocols, traffic monitoring, and historical reporting for capacity
planning may be incorporated into a network management application to ensure
proper data transport through a network infrastructure. Conventional storage
management may center on allocation of storage resources via LUN assignment,
RAID levels, storage utilization, and backup scheduling.
Advanced Storage Area Networks (SANs) provide one of the
best approaches for addressing the explosion of data and its management. SANs
help enable storage consolidation and deliver higher availability of critical
enterprise data and applications. Furthermore, SANs facilitate improved storage
resource utilization and more effective storage management.
The next wave
Storage automation is being dubbed the 'next big wave' in storage
management. This is because storage networks have become so
heterogeneous and complex to manage that automating them will soon become a
necessity. You should also go for cooperative data classification. The aim here
is to quantify the value of different datasets, which in turn would determine
the class of storage that each dataset requires.
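As a rough illustration, here is a minimal Python sketch of quantifying a dataset's value from a few attributes. The attributes and their weights are hypothetical assumptions, not a standard formula.

    def value_score(access_freq, criticality, regulatory):
        # Weighted score in [0, 1]; each input is normalized to [0, 1]
        return 0.4 * access_freq + 0.4 * criticality + 0.2 * regulatory

    # A frequently used, business-critical dataset scores high...
    print(value_score(access_freq=0.9, criticality=1.0, regulatory=0.5))  # 0.86
    # ...while old, rarely touched data scores low
    print(value_score(access_freq=0.1, criticality=0.2, regulatory=0.0))  # 0.12

The score then maps to a class of storage, as sketched in the tiering example further below.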
Central to planning is the concept that data changes over
time, in relation to evolving customer demands and business conditions. Along
similar lines, some enterprises have already shifted
non-essential data to secondary tiers of storage, freeing primary storage
resources for high-value applications.
Tiered storage segments data based on its varying business
value. This helps to control storage costs and simplify data management. A
typical tiered architecture for storage could use a SAN for transactional or
production data. Data of lesser importance can be shifted to secondary arrays
behind the primary storage networks.
So to cut a long story short, organizations going for
tiered storage must (see the sketch after this list):
- segment data based on business value
- realise that data keeps evolving over time, and so should the storage and its capacity
- winnow thousands of datasets into a manageable number
- tell their vendors the specific attributes required for each storage class
- automate storage management end to end, and set common enterprise-wide policies for it
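Here is the sketch referred to above: a minimal Python illustration of segmenting datasets into tiers and winnowing many of them into a manageable number of classes. The scores and cut-offs are hypothetical; in practice they come out of the data classification exercise.

    from collections import defaultdict

    def tier_for(score):
        # Map a business-value score to a storage class (hypothetical cut-offs)
        if score >= 0.7:
            return "tier-1: primary SAN"
        if score >= 0.4:
            return "tier-2: secondary array"
        return "tier-3: archive"

    scores = {"orders_db": 0.95, "crm": 0.80, "mail_archive": 0.45,
              "old_reports": 0.20, "scratch": 0.05}

    tiers = defaultdict(list)
    for name, score in scores.items():
        tiers[tier_for(score)].append(name)

    for tier in sorted(tiers):
        print("%s -> %s" % (tier, ", ".join(tiers[tier])))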
The lucky seven
What you should choose and how you should go about it are the topmost
concerns on a CIO's mind when planning for effective storage capacity. We tell
you how you can do that in just seven simple ways.
Small can only get bigger.
Most organizations make the mistake of starting a mammoth task in one go. And
that's where most capacity-planning efforts fail after the initial
hoopla. This is especially true for those who don't have any previous
experience in this area. So start with just a few of the most critical
resources (say, processors or bandwidth) and gradually expand the program
as you gain more experience.
Speak their language.
Instead of asking your team for predictions about how resource demands will
rise with the business, it's wiser to ask the developers and end users
about what they actually know. For instance, enquire about which resources
are used during peak loads, and in what quantities, and plan around that.
Compatibility/interoperability.
When you are choosing capacity-planning tools, keep in mind new and upcoming
architectures in the market and select packages that can be used today and will
be compatible with ones that come tomorrow. This consideration should extend not
just to servers, but to disk arrays, tape equipment, desktop workstations and
network hardware.
Involve suppliers.
To make your capacity-planning products usable across multiple platforms, share
your plans with your suppliers. When you do this, always account for
logistics and other overheads, and reach a consensus. Once they are involved,
they will stand by you in getting products that are close to your planned ones.
Plan for the unseen.
Some capacity upgrades are linear, ie doubling the quantity of one
resource (say, processors, memory, channels, or disk volumes) will
double the cost of the upgrade. But if the upgrade approaches an upper limit
(say, the maximum number of cards, chips, or slots that a device can hold), a
relatively modest increase in capacity may end up costing an immodest amount in
additional hardware.
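A small worked example in Python makes the point; the prices and slot limit here are hypothetical.

    SLOT_PRICE = 2000         # cost per added card (hypothetical)
    SLOTS_PER_CHASSIS = 16    # upper limit of one device (hypothetical)
    CHASSIS_PRICE = 30000     # cost of an additional chassis (hypothetical)

    def upgrade_cost(current_cards, added_cards):
        total = current_cards + added_cards
        # Ceiling division: chassis needed beyond the first one
        extra_chassis = max(0, -(-total // SLOTS_PER_CHASSIS) - 1)
        return added_cards * SLOT_PRICE + extra_chassis * CHASSIS_PRICE

    print(upgrade_cost(8, 4))    # linear region: 8,000
    print(upgrade_cost(14, 4))   # crosses the slot limit: 38,000

Adding four cards in the linear region costs 8,000; adding the same four cards near the limit costs nearly five times as much.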
Plan for the occasional.
A forecasted change in workload may not always cause an increase in the capacity
required. Departmental mergers, staff reductions, and productivity gains may
result in some production workloads being reduced. Similarly, development
workloads may decrease as major projects become deployed. While increases in
needed capacity are clearly more likely, reductions are possible. A good
guideline to use when questioning users about future workloads is to emphasize
changes, not just increases.
Sky is the limit.
One of the best ways to continually improve the effectiveness of the
capacity-planning process is to set a goal to expand and improve at least one
part of it with each new version of the plan. Possible enhancements could
include the addition of new platforms, centralized printers, or remote
locations. A new version of the plan should be created at least once a year and
preferably every six months.