
Manage Your Assets' Life Cycle

PCQ Bureau

So critical are computer systems for an organization's processes that without them the processes can come to a halt. Applications such as e-mail, databases and ERP today run the business and are no longer considered mere support systems. Terms such as knowledge management, employee productivity, efficiency, flexibility, change management and cost cutting are frequently used in corporate board rooms. What makes all this happen is the enterprise's IT infrastructure. However, while we talk of managing everything else, we don't talk much of managing the IT infrastructure itself. That is why we present this story on managing the life cycle of the computer information, software application development, hardware and IT services that an organization requires. None of these can work independently of the others; all work in cohesion to make a better IT system.


For a company, the most valuable thing is its data and the information contained within it. With the digitization of information, a major function of computer hardware and software is to process and store this information. Since hardware and software are expensive resources, information processing and storage should be planned judiciously to make the best use of them.

A common impression that people have about software is: “I can't make my business depend on computer applications. They are so unpredictable and have a fondness for failing just when they are needed the most.” This gives rise to the need for enhancing the development process so that software delivers what it is required to.

The computer hardware base, which includes desktops, notebooks, servers and network equipment, forms the framework to run applications. The more efficient, robust and scalable this base is, the better it will run your applications. 


Coming to IT services, there is a need to manage their life cycle since they are important for our systems and we pay dearly for them. 

All these components exhibit a life cycle and so we should look at their management at each stage of their lifetime. This way we can bring consistency between the various stages making the systems more flexible to change, efficient, scalable, reliable and productive. All this and more can be achieved while still keeping costs low. 

Why ILM?
- Explosive information growth requires efficient ways to manage information
- The cost of information storage decreases as one moves down to slower media, resulting in savings in the IT budget
- With better access to information, a business can derive strategic advantages from it
- Regulatory issues make it mandatory to store information for longer periods of time and demand a more transparent storage process

Finally, as Shakespeare said, the world is a stage and we all come to perform our act and then go away. Similarly, every piece of hardware, software and information, and every service too, comes to an end. But the IT infrastructure still has to carry on running the business. Old data, systems and services get replaced by new ones, starting a new life cycle that has to be managed as well, refreshing the entire process.

To know the various life-cycle stages of information, software, hardware and services and how to manage them at each stage, read on.

Information Life cycle (ILM)


Data can be defined as information (to be) processed and stored by a computer. This information could be anything: facts, figures, business intelligence and so on. Any organization will have its own set of information. At any given point in time some pieces of this information will be more valuable than others. At another point in time, however, the previously more valuable pieces may become less valuable and the less valuable ones may become more valuable. This is an ongoing process that starts as soon as a piece of information is created and continues until it is not needed at all and can then be destroyed. This makes up the life cycle of information.

In the modern world this information lies in the form of computer data, which uses valuable hardware and software resources for storage and processing. These storage and processing requirements also attach a cost factor to information, along with the value that it carries. However, while the value of information rises and falls over time, the cost associated with it remains static. So, there is a need to manage the relationship between the cost and value of information during its lifetime. This brings us to a concept called ILM (Information Lifecycle Management).


The governing rule of ILM is that the cost of processing and storing information should not be more than its value at a particular point in time. This rule can be applied to real-life solutions by understanding that as the value of information declines, the need for it to be readily available and quickly accessible also decreases. Information with higher value, at a particular time, needs to be accessible faster than information with lower value. A typical example is the current year's financial data, which is required round the clock by different people in the organization. This is high-value information and should be accessible faster than, say, the previous years' sales figures, which are seldom required during the day-to-day working of an organization, although they may become highly important on the day of a budget meeting, but not the day after it.

So, we present a very simplistic solution to manage the cost-value relationship of information. Have two levels of storage for your data, with the first level comprising expensive high-speed disks such as SCSI disks. The second level can have slower, cheaper disks, such as ATA disks. Keep all high-value information that is required more frequently on the high-speed disks, and keep all low-value data that is required less frequently on the slow-speed disks. You may not mind, and your business may not suffer, if the low-value information takes some extra time to access. This is not the case for high-value information: you would like it to be accessible immediately, so we suggest keeping it on high-speed storage media. This way you save on cost by keeping some data on slow-speed disks, as compared to storing all data on high-speed disks.
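To make this two-tier idea concrete, here is a minimal sketch in Python of how such a placement decision could be scored. The scoring function, the threshold and the file names are purely illustrative assumptions; they are not taken from any ILM product.

```python
# Minimal sketch of two-tier data placement, assuming a simple
# numeric "business value" score per data item (hypothetical).

from dataclasses import dataclass

FAST_TIER = "high-speed SCSI array"   # expensive, fast access
SLOW_TIER = "low-cost ATA array"      # cheaper, slower access

VALUE_THRESHOLD = 0.7  # assumed cut-off; tune to your own cost-value policy

@dataclass
class DataItem:
    name: str
    accesses_per_day: int   # how often the business touches it
    age_days: int           # time since creation

def business_value(item: DataItem) -> float:
    """Toy scoring function: frequently used, recent data scores higher."""
    freshness = 1.0 / (1.0 + item.age_days / 365.0)
    usage = min(item.accesses_per_day / 100.0, 1.0)
    return 0.5 * freshness + 0.5 * usage

def choose_tier(item: DataItem) -> str:
    return FAST_TIER if business_value(item) >= VALUE_THRESHOLD else SLOW_TIER

# Example: current-year financials vs. last year's sales figures
print(choose_tier(DataItem("FY-current-ledger", accesses_per_day=200, age_days=30)))
print(choose_tier(DataItem("FY-previous-sales", accesses_per_day=2, age_days=500)))
```

In a real deployment the scoring would come from the organization's own policies rather than a fixed formula, but the shape of the decision stays the same: estimate value, compare against cost, place accordingly.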

Storage Hardware



Above we provided a very simplistic approach to managing information with two levels of storage hardware having different levels of performance and cost. But one can have several levels of storage hardware, ranging from high-speed fiber-channel SANs (Storage Area Networks) to slow-speed tape drives, with NAS (Network Attached Storage), optical jukeboxes and disk-array systems in between. One thing to remember here is that when we talk of slow-speed tape drives we are not necessarily talking of single tape drives and tape vaults; instead, we are talking of tape libraries that provide huge storage capacities with complete automation for tape management. This eliminates manual intervention for storing tapes after writing and retrieving them from vaults for reading. The keyword here is automation.


Management software



But then there is one more question to answer: who will manage the movement of data from one level of storage to another, and how will they determine when to move which data? The answer is software, which is a crucial component of an ILM solution. Specialized management software provides a single logical view of the different types of storage hardware and lets you set policies for the automated movement of data from one storage level to another, based on the value of the information in it, which is determined by factors such as time, age of the information, legal requirements, etc. The procedure is completely transparent to the user, who may not even know the exact location where a certain data item is stored. What the user sees is a single logical view of the data, while the management software takes care of the physical storage, governed by the policies defined by the organization. So, while the user may see two files stored at a single logical location, they may actually be stored at different places. Depending on the value of the information contained in the two files, one may be accessible faster than the other, as it may be residing on a faster storage medium than the other, lower-value file.
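The "single logical view" can be pictured as a mapping layer between the paths users see and the physical tiers the policy engine has chosen. The sketch below is a toy illustration with made-up paths and tier names; it is not the interface of any actual management product.

```python
# Toy illustration of a "single logical view": the user always refers to
# one logical path, while the management layer maps it to whichever
# physical tier the policy has placed the file on. Paths are made up.

logical_to_physical = {
    "/finance/current/ledger.xls": "san-fast://vol1/ledger.xls",
    "/finance/2001/sales.xls":     "ata-slow://vol7/sales.xls",
}

def open_logical(path: str) -> None:
    physical = logical_to_physical[path]          # the user never sees this
    print(f"fetching {path} from {physical}")
    # ... the actual read from the appropriate storage device would go here

open_logical("/finance/current/ledger.xls")
open_logical("/finance/2001/sales.xls")
```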

Vendors like EMC and HP provide various levels of storage hardware along with management software to manage information across the entire hardware spectrum.

HSM vs ILM



Before the term ILM came into the picture, the industry was talking of another term, HSM (Hierarchical Storage Management). ILM evolved from HSM, but models the real world better. In HSM, the only thing that is done is to move data to slower and cheaper media as it loses value with time. This is perfectly all right, as the value of information decreases with time until it becomes nil, at which point it should be destroyed. However, it is also possible that the value of a piece of information suddenly changes from unimportant to critical, for example because of legal issues, in which case it will be required more frequently and must be moved from slow-speed to high-speed storage. Such a situation highlights the biggest strength of ILM, which is movement of data across the storage spectrum according to the value contained in it, and not only a downward movement with time as in HSM. As soon as a data item's value to the organization increases, the storage-management software automatically moves it to a higher level of performance in terms of speed of data access. All this is governed by the policies set in the management software and happens in a manner transparent to the users. Whenever the value of a data item decreases, it is again moved to slower-performing and thus cheaper hardware.
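A hedged sketch of such a bidirectional policy check is given below: age normally demotes a record, but an event such as a legal hold promotes it back to fast storage. The tier names, the 90-day limit and the legal-hold flag are assumptions made up for illustration, not any vendor's policy language.

```python
# Illustrative re-tiering policy (not a real product's policy syntax).
# ILM differs from classic HSM in that data can move *up* as well as
# down, whenever its value to the organization changes.

from dataclasses import dataclass

@dataclass
class Record:
    name: str
    age_days: int
    legal_hold: bool        # e.g. suddenly needed for a court case
    current_tier: str       # "fast" or "slow"

def target_tier(rec: Record) -> str:
    # Promotion rule: a legal hold makes the record critical again.
    if rec.legal_hold:
        return "fast"
    # Demotion rule: ordinary records lose value as they age (HSM-style).
    return "fast" if rec.age_days <= 90 else "slow"

def retier(rec: Record) -> None:
    wanted = target_tier(rec)
    if wanted != rec.current_tier:
        print(f"moving {rec.name}: {rec.current_tier} -> {wanted}")
        rec.current_tier = wanted   # the actual migration would happen here

retier(Record("old-contract.pdf", age_days=900, legal_hold=True, current_tier="slow"))
retier(Record("daily-report.doc", age_days=400, legal_hold=False, current_tier="fast"))
```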


What data to manage



This is an important question while deciding an ILM strategy. Generally, organizations tend to manage all the data that they have. But, this may not be the right decision. At a broad level, any organization will have two types of data: company data and personal data of the employees. 

The personal data of employees may not be important to your business, so you can safely ignore managing personal data. 



Your business data can be further classified as operational data, which should be stored on high-speed media, and reference data, which should be stored on slow-speed media. Companies with huge amounts of operational and reference data, such as those in finance, insurance, banking and healthcare, stand to gain the most from an ILM strategy. However, this does not mean that other organizations should not give ILM a thought.

Effects of Technology Changes 



Another important thing to keep in mind before jumping in for an ILM strategy is the cost of technology changes in hardware. Changes in technology occur quite frequently in this industry and your existing infrastructure could become obsolete before you have utilized its full benefits. 

Buying new hardware means more cost. Migrating data from the old hardware to new means even more cost. So, an ILM strategy should look at whether the investments being made now will be able to provide the beneficial returns before new investments have to be made. It should not happen that the current hardware needs to be replaced before giving you the desired returns. This discussion applies to not only storage hardware but also any other hardware used in your organization. 

Software Life cycle (SLM)

In this highly competitive world and in tough economic conditions, enterprises must have excellent business systems to maintain an edge over others. Today's business systems rely on software to provide efficiency and flexibility to an enterprise. Enterprises may use two types of software. They use a lot of readily available commercial software and customize it to their requirements, and they also go in for software that is developed specifically for their requirements. This development can be done internally or handed over to software-development companies. Whatever the case, the task of developing and implementing software to support complex business processes on time and within budget has been less than satisfactory for many companies. On top of that, users have become more demanding: they want higher quality at a lower cost, faster turnaround and greater customization.

This calls for a change in the software-development model so that software provides customer satisfaction over a longer period of time, which is where the concept of SLM (Software Life-cycle Management) comes into the picture. SLM focuses on the complete cycle of application development. Theoretically, it deals with the development and delivery of software, starting from the time a requirement for it is created to the time it is delivered and put to use. But the cycle does not end here. The phase after the application has been deployed is equally critical; during this time, the application undergoes improvements to incorporate continuously changing business requirements. In all this the most critical aspect is to have tight integration and synchronization between the various stages of application development. Application development is an ever-evolving process, with new inputs and changes occurring at each level. There should be a free flow of information between the various stages so that the complete life cycle is streamlined and adheres to user requirements.

Why SLM?

- To improve quality by formally gathering requirements and ensuring the final application meets the needs of users
- To reduce cost by making developers follow a single set of best practices
- To cut maintenance time by keeping the application design synchronized with the requirements
- To maximize the use of resources by making designers focus only on the business requirements and not on their low-level implementation
- To increase business flexibility by introducing better change management during the development life cycle
- To provide faster development and a more proficient development team that adapts quickly to changing business needs

Application Life Cycle 



At a broader level, the software life cycle can be classified as a cycle of the following interrelated steps: requirement definition, design, development, testing, deployment and maintenance.

The life cycle does not focus on the developer or programmer alone but on the entire team involved in the development and deployment of the application, including the users who will use it. The emphasis is on tight integration between the various steps of the life cycle, so that the stages remain synchronized. This helps in developing applications that meet specifications more accurately. Having said that, let's look at the key issues involved during each phase of the application's life cycle.

Requirement specification, analysis and definition: This is the first step in the software-development cycle. Before starting off with application development, it is necessary to specify what the application has to do. The users should be clear about what they expect from the application. The requirements specified by them are crucial for the development team, because incomplete specifications will create problems for users and developers alike after the application has been built. Next, a careful analysis of the requirements is needed to check whether they can feasibly be implemented in a computer program. If the desired functionality can be put into software, the requirements are formally gathered for the other teams to work according to them. These requirements provide a template for the rest of the team members to base the application on. A formal requirement definition also prevents expensive changes further down the life cycle.

Application design: This step creates the design of the application as per the requirements specified in the definition. It is always good to study the requirement definition properly and suggest whatever changes are needed in it, as changes after this step will be more expensive. The design architects and requirement analysts should, hence, work in close conjunction to make the design blueprint. Further, modeling tools should be used to model the design.

Development: With the design blueprint ready, developers can start building the application. Developers and architects should coordinate at each level to make sure that the design is being implemented properly by the developers.

Any changes made at either the design or the development stage should be synchronized so that the application does not drift away from the specification. Proper documentation and version control are also required at this stage. With offshore projects, where geographically dispersed teams work on a single project, it is also important to have collaborative solutions that enable the different teams to contribute to a single code base.

Test: This includes testing individual elements of the application as well as the complete application. The goal is to make sure that the developed application meets the desired specification, functionality and performance levels. The code should be implemented in an efficient and scalable manner. No unnecessary code should be sitting in the application, as it may be a point of error and may go unnoticed until the problem finally occurs. Scalability is also required to build robust systems that can scale appropriately as requirements change. 

Well-documented requirement definitions help the team design tests for the application once they understand the way it will be used.

Deployment: At the time of deployment the team has to look at the platform on which the application will run. If the application development is done accordingly, the deployment team has the option to choose the right platform for deployment. Factors like performance, security, reliability, availability and low ongoing maintenance costs are all important.

Maintenance: This is an on-going process which includes training, troubleshooting, support, providing software updates, etc. 



The biggest problem here is that the maintenance team is usually different from the development team, so it becomes a tough task for them to provide support for the applications. Therefore, proper documentation and well-developed applications conforming to the requirements are essential. The documentation should also provide backward traceability, so that the support team can track exactly which piece of code was written by which developer. This helps in contacting the original developers in case it is required.

Management: During the above stages of the development life cycle, team members at different stages must communicate effectively with each other. This is required to manage changes that may occur at any stage, and it calls for a change-management system within the life cycle: changes are inevitable during development, and such a system makes the process more flexible and reduces the costs associated with changes at any stage. The software life cycle is not a linear process; after the final deployment the system is further refined and reworked, bringing us back to the first stage again. This is a cyclical process and continues during the entire life of a particular application.

Hardware Life cycle (HLM) 

Managing your hardware IT infrastructure over its life emphasizes decision processes that influence cost and efficiency. Is your hardware delivering with the efficiency needed by your business, making it a worthy investment? Or is it becoming expensive to maintain? Quite understandably, the decision to change hardware is based on specific business functional requirements along with economic and technical feasibility.

The hardware infrastructure doesn't just include desktops, although they are its visible face, being the most abundant of the lot. It also includes servers, networking equipment (like switches, routers and firewalls), power backup and storage infrastructure (like SAN, NAS and tape drives). Over a period of time, these degrade or become insufficient to meet burgeoning business requirements. Clearly, this restricts efficiency as well as impedes growth. In such a situation the company has to decide whether to upgrade the old equipment, wherever possible, or purchase or lease new equipment. If it's the latter, then the question of what to do with the old equipment comes in. The organization can sell or auction it, lease it out, or move it to another line of operation where it is sufficient to meet the requirements. For example, in the case of desktops, if existing hardware becomes obsolete for a software developer, it may still be adequate for front-desk or data-entry operations.

Besides this, one must also look at the track record of specific hardware. How many times has it been serviced? Has it been a constant problem to maintain, thereby increasing your support costs? If so, then it might be better to simply replace it with something new.

In the case of servers, most companies today face the problem of server proliferation, i.e., too many servers. This poses the problem of managing them and can also be quite costly. In such a case, a company has to assess the feasibility of buying a large, powerful, multi-processor server instead of having multiple smaller servers. There's also the option of shifting to rack servers instead of using multiple standalone ones, and if floor space is at a premium, the company could opt for blades. Here again, the problem of what to do with the old servers comes in. The decision here would be based on the condition of the older servers. They could be redeployed as backup servers, or moved into different departments depending upon specific requirements.

As with desktops and servers, an organization must take into account the life cycle of the other hardware components we just mentioned. Hardware life-cycle management encompasses this entire process. The cardinal objective is to deliver quality systems within the stipulated time and within the cost constraints, using an identifiable, measurable and repeatable process. While this process will vary for different organizations based on their needs, it has broadly been divided into five stages by some hardware industry leaders: Plan, Purchase, Deploy, Maintain and Refresh.

Why HLM?

- Lower total cost of ownership
- Access to the latest relevant technologies
- Easier maintenance, as the frequency of problems increases after a particular time period
- High efficiency, as users always get the desired level of performance from their hardware
- Better control over hardware assets
- More reliability, as the time frame for the occurrence of problems can be predicted

PLAN



This phase includes asset management of the hardware, which covers inventory management and acquiring data about the hardware and software in your organization. On the basis of this, reports are generated, which help in taking future purchase decisions. For those who've been through it, you'll understand what a nightmare this can be. Hardware is constantly undergoing adds, moves and changes. While it's very easy to make the inventory of a new machine, it's equally difficult to track the changes it has gone through after one or two years. Nevertheless, hardware inventory management is essential, and it brings improved efficiency as well as better maintenance of valuable assets. For instance, a ready report of your current hardware inventory helps you decide what hardware to buy afresh, what to upgrade, and so on. This, in turn, helps in deciding what form of financing to use, whether the company's own capital or other financing instruments, and in identifying what technology to buy.
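As a rough illustration, the sketch below shows the kind of report such an inventory could feed: a list of asset records checked against a refresh policy. The record fields, asset names and the three-year rule are all assumptions made for the example, not output from any inventory product.

```python
# Sketch of an asset report from a hardware inventory, assuming records
# with purchase dates have already been collected by your inventory tool.

from datetime import date

REFRESH_AFTER_YEARS = 3   # assumed refresh policy; set your own

inventory = [
    {"asset": "DESK-0041", "type": "desktop", "purchased": date(2001, 4, 1)},
    {"asset": "SRV-0003",  "type": "server",  "purchased": date(2003, 9, 15)},
]

def needs_refresh(item, today=None):
    today = today or date.today()
    age_years = (today - item["purchased"]).days / 365.0
    return age_years >= REFRESH_AFTER_YEARS

for item in inventory:
    status = "refresh" if needs_refresh(item) else "keep"
    print(f'{item["asset"]:10} {item["type"]:8} {status}')
```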

PURCHASE



Purchase decisions have to be made keeping in mind the TCO (Total Cost of Ownership) and the ROI (Return on Investment) expected. Deferring investments, either on the speculation that technology changes will take place or that hardware prices will fall, does not hold much weight, purely because drastic technological changes don't take place over such a short span, and even if they do, they come with an enormous price differential. Then the question of whether the price is worth the performance for current and future needs comes into play. Similarly, the optimism that hardware prices will fall in the future requires careful financial evaluation: often the cost saving realized this way gets negated by the loss from running insufficient machines in the interim. Purchase decisions are difficult and require a balanced view of what you currently have and what you really need for the future.
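A back-of-the-envelope version of that trade-off is sketched below; every figure in it (unit count, price, expected price drop, productivity loss) is an assumption chosen only to show the shape of the calculation.

```python
# Back-of-the-envelope check: is it worth deferring a hardware purchase
# in the hope of a price drop? All figures below are assumptions.

units = 50
price_now = 1_000.0            # per desktop today
expected_price_drop = 0.10     # 10% cheaper in six months (speculation)
defer_months = 6
productivity_loss_per_unit_month = 25.0   # cost of running insufficient machines

saving_if_deferred = units * price_now * expected_price_drop
loss_while_waiting = units * productivity_loss_per_unit_month * defer_months

print(f"saving from waiting : {saving_if_deferred:,.0f}")
print(f"loss while waiting  : {loss_while_waiting:,.0f}")
print("defer" if saving_if_deferred > loss_while_waiting else "buy now")
```

With these made-up numbers the expected saving (5,000) is smaller than the productivity lost while waiting (7,500), which is exactly the pattern the paragraph above warns about.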

DEPLOY



After new purchases have been made, this phase is about deploying your applications on the new hardware. It covers initial system setup, updating the settings on the new systems, reporting of system operation, and manual as well as unattended monitoring. The deployment phase is important in that you must ensure the new equipment is configured to handle your existing applications without hampering business operations. It's therefore recommended to do a test run of mission-critical hardware deployments before putting them into the main system. Software deployment and data migration are of vital importance in this phase.

This means finding out which software is mandatory, which is optional, and scheduling when it is to be delivered. It also encompasses capturing, developing, deploying and managing software images for automated software configuration. A step further is the distribution of applications and drivers over the network. Then it involves copying one system's image and deploying it to another.

Finally, there is the transfer of personal files and settings from one machine to another. Simplifying configuration and providing image assistance are part of the process.

MAINTAIN



This is the most difficult and lengthiest phase of all, as it involves constant monitoring and maintenance of the hardware. As in the deployment phase, there are processes here for hardware configuration, system health, and software deployment and migration. Clearly this phase covers the overall well-being of the array of machines and inevitably includes hardware, software and image updates, along with service support and inventory management.

REFRESH



Finally, it's in this phase that the entire cycle starts all over again with fresh requirements and new techniques. As in the planning phase, there is ample scope for asset management of the hardware, and the process is similar. If any migration is required, it can also be tackled at this stage. If there is any kind of asset recovery from other sources, it should be done and made part of the fresh cycle. Life-cycle management brings a plethora of benefits. Chief among them is delivering quality, cost-effective systems that drive business applications on the current infrastructure, along with better maintenance and easy enhancements. In addition, it also ensures increased automation, easy identification of risks, better accountability, establishment of appropriate levels of management authority and much more.



Several vendors have come up with hardware life-cycle management tools. Dell has come up with tools that are supposed to significantly reduce the total cost of ownership; HP, Intel and some others also have such solutions. With growing business requirements and an increasing hardware install base, the need for these tools keeps growing. The sooner they are implemented, the better it is for the organization.

Service Life cycle

Unlike hardware, software and information, where you have something concrete in hand to manage, services are more abstract. They're mostly an outsourced component, largely governed by contracts and agreements. This makes them all the more difficult to manage, since you first need to assess your requirements, take the service from some provider and then constantly monitor its consistent delivery. The process does not end here but goes in for a complete refresh after a particular service term is over. Services comprise those jobs that your organization doesn't have the resources to provide, or that are more cost-effective to outsource. Maintenance contracts for network and hardware infrastructure, bandwidth for Internet connectivity and web hosting for the company's website are some of the services that an organization takes from others. Then there are services that organizations avail of based on their specific business needs. A common example is application services provided by an ASP (Application Service Provider), wherein a vendor leases out specific business apps, like an ERP or e-business suite, to an organization for a particular period. There are specific issues that need to be tackled with each service that you take and utilize, and since managing their life cycle is a long-term decision, organizations must plan very carefully. Let's look at some of the common services and the issues associated with them.

Hardware and Network Maintenance



Let's consider an AMC (Annual Maintenance Contract) for network and hardware resources. Often there are issues with the level of service provided, despite having service level agreements, or SLAs. An AMC can be of two types: comprehensive and non-comprehensive. A comprehensive AMC covers your entire infrastructure and includes replacement, repair and maintenance of hardware and software. In a non-comprehensive AMC, the service provider merely identifies the problem, and it is then up to you to get it solved. In cases where the number of systems to be maintained is very large, the organization should ask the service provider for resident engineers, so that you don't have to call the provider every time you face a problem. Having said that, the kind of AMC you go in for depends on the needs of your organization and your budget.

All the above issues should be addressed while writing the SLA along with the penalties for not adhering to the SLA.



Another issue in the life-cycle management of your hardware and network is that you need to continuously evaluate the service provided by your service provider and verify that the terms and conditions of the SLA are being adhered to. You can base your check on user feedback, or on the information base the provider creates by filing reports of the repair or replacement work performed.

Bandwidth Management



A company's requirement for bandwidth grows with its size and needs. Your bandwidth requirements depend on factors such as the kind of applications you need to run and the kind of usage your employees have, i.e., whether they require heavy downloads or not. So, as the number of employees and applications grows, your bandwidth requirements also grow. The first decision that an organization has to take when buying bandwidth is whether to go in for a leased line, ISDN or a VSAT connection.

Many factors govern this decision; for instance, if your office is at a remote location, you might go in for a VSAT connection. An ISDN connection might turn out to be more expensive, while a leased line is the preferred method for many. Then there is the decision on the amount of bandwidth to be taken, which is a function of your needs and budget. Once the organization decides on the kind of technology and the amount of bandwidth, the issue of choosing the service provider arises. This is where the factors of price and the vendor's brand name come into the picture. Once the service provider is decided upon, an SLA has to be drawn up.

After this, the important task is to monitor the level of service being provided at any time of the day, using various hardware and software bandwidth-monitoring tools, and to take the service provider to task if the promised level of service is not being delivered. If the level of service is not satisfactory over a period of time, you might want to switch to another service provider. The other thing to keep in mind to improve the life cycle of your bandwidth is to make sure that there is no improper usage of the available bandwidth. For this, check that users are not making unnecessarily heavy downloads or running unnecessary applications, because these might give you a false impression that the bandwidth currently available is not enough. All this calls for a proper Internet usage policy to be in place.
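A very small sketch of such a check is shown below: it compares hourly throughput samples against the committed bandwidth and flags a sustained shortfall. The committed rate, the tolerance, the sample values and the escalation threshold are all illustrative assumptions rather than figures from any real SLA or monitoring tool.

```python
# Sketch of an SLA check on bandwidth samples (values are illustrative).
# A real setup would pull these samples from a monitoring tool.

PROMISED_KBPS = 512            # bandwidth the provider committed to
TOLERANCE = 0.9                # accept anything above 90% of the promise

hourly_samples_kbps = [540, 510, 470, 300, 290, 450, 515, 520, 480]

shortfalls = [s for s in hourly_samples_kbps if s < PROMISED_KBPS * TOLERANCE]
shortfall_ratio = len(shortfalls) / len(hourly_samples_kbps)

print(f"{len(shortfalls)} of {len(hourly_samples_kbps)} samples below SLA level")
if shortfall_ratio > 0.25:     # assumed escalation threshold
    print("raise the issue with the service provider")
```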

Web Hosting



Every organization, big or small, will have its website, and in most cases organizations take web-server space from a web-hosting service provider or an ISP. There are some questions that need to be answered before going with a particular provider. The amount of storage space, the kind of platform (whether it is Linux or Windows), database support, etc are all critical issues. Then there are issues of data security, availability of service, ease of management, and so on. Once you have chosen a particular provider, changing business requirements bring in new challenges for the organization. For example, if you are hosting static web content, your requirements from the service provider are modest. But if, with time, the company needs to host dynamic web content with database support, the requirements and the associated cost go up. You have to see whether the current service provider is able to meet those requirements while maintaining quality of service; otherwise you will have to switch to another service provider who can match your needs better.

In conclusion, evaluation of the services you use against your needs is a continuous process and has to be done periodically. If the current service provider is not able to meet your growing requirements, you might have to switch to another service provider.

By Ankit Kawatra, Anoop Mangla and Sudarshana Mishra

The Big SLM Solutions

Borland



Borland offers solutions for all stages of SLM (Borland calls it ALM, Application Life Cycle Management), which are tightly integrated and allow free flow of information between the various stages. They have tools like CaliberRM for requirement definition, Together for design that uses Unified Modeling Language to create diagrams. The diagrams can be directly represented in program code for either Java and J2EE or C# and .Net. For development, Borland has the well-known JBuilder, C++ Builder, C# Builder, Delphi, etc. Borland Optimizeit Suite and Optimizeit ServerTrace are testing and profiling tools, which go further than many performance testing technologies by helping developers profile their applications and optimize performance during the development process. Among the benefits of standards-based platforms such as Java is the freedom teams have to choose among different application servers for deployment. Borland Enterprise Server delivers performance and security along with adherence to J2EE specifications. Borland StarTeam is a comprehensive software configuration management solution that uses a central repository to facilitate communication among team members responsible for the various tasks.

IBM Rational



IBM Rational also has a complete array of tools for the various stages of ALM. Products like IBM Rational Requisite Pro, Rose Data Modeler and Rose XDE Modeler are for requirement definition and analysis. Rational Rapid Developer, Rose Technical Developer and Rose XDE Developer are for design and construction of apps and work with various languages and development environments. For automated testing there are tools like Rational Functional Tester for Java and Web, Performance Tester and Rational Robot. Then there are tools for life-cycle management, like the Rational Team Unifying Platform and the Rational Suite family, which is a life-cycle solution for cross-functional team development, as well as Rational SoDA for documentation. IBM has solutions for mainframes also.

While different tools from a single vendor can work closely together, there is little integration between solutions from different vendors. So, the need is to have tools from different vendors talk to each other. This would let developers choose a tool that works with their existing solutions and with tools they may acquire in the future, irrespective of the vendor they buy it from.
