
How to Keep Your Applications Healthy

PCQ Bureau

Software applications are the backbone of any organization's commercial activities. Irrespective of size, every company depends on various software applications to accomplish each aspect of its business in today's global world. Companies invest millions in implementing the best possible enterprise software on their premises. But soon after deployment, unless applications are managed continuously, downtime can result, causing business loss.


Rising complexity intensifies the challenge of optimizing application performance. Enterprise application management brings its own share of complex issues and challenges, the sheer abundance of applications being the most pertinent. With IT teams juggling the demands of hundreds of internally implemented applications, it is not easy to manage and utilize them to the optimum level. Importantly, once an application has been deployed, it is costly and difficult to make material changes to it.

The ultimate result is that while enterprise applications continue to become more important to the operation of the business, issues with their performance become more prevalent and more difficult to identify and resolve. Recognizing the importance of proper monitoring and management of applications, we will focus on some of the best available tools for application management and performance monitoring, and also cover some of the best practices for application management.


An application monitoring tool can be a piece of software or an appliance that continuously monitors, diagnoses and reports problems that slow down the applications you are running.

Application performance relates to how quickly transactions are completed or information is delivered to the end user by the application via a particular network, application and Web services infrastructure. The idea is to detect and resolve problems before users start experiencing any difficulties or poor performance.
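
As a minimal sketch of this idea, the snippet below times a single synthetic transaction against a hypothetical application URL and raises an alert when it exceeds an assumed two-second threshold; the endpoint and the threshold are illustrative, not taken from any of the products discussed here.

```python
# Minimal sketch of synthetic transaction monitoring: time one request to a
# hypothetical application URL and flag it before users notice a slowdown.
import time
import urllib.request

APP_URL = "http://intranet.example.com/orders"   # hypothetical endpoint
THRESHOLD_SECONDS = 2.0                          # assumed acceptable response time

def check_response_time(url: str) -> float:
    """Fetch the URL once and return the elapsed time in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()                          # pull the full body, like a real user
    return time.monotonic() - start

if __name__ == "__main__":
    elapsed = check_response_time(APP_URL)
    if elapsed > THRESHOLD_SECONDS:
        print(f"ALERT: {APP_URL} took {elapsed:.2f}s (threshold {THRESHOLD_SECONDS}s)")
    else:
        print(f"OK: {APP_URL} responded in {elapsed:.2f}s")
```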

Spotlight shows the activities of disk, memory and SQL processes in real-time for performance monitoring. Also, the I/O speed is tracked for logical and physical data reads

Database optimization tools



With increasing data size, databases tend to get sluggish over time. Database optimization tools are used to improve their performance. As the size of a database increases, a series of logs gets created, which in turn increases the overhead. These tools run diagnostics on the database by reviewing parameters such as log size, cache size and shared pool size. Such tools help the DBA tune the performance of the database, along with the option of monitoring the database and its space consumption for logs and buffers.

For performance tuning, these tools follow different algorithms based on the type of database server. For example, a tool called DB Tuning Expert for Oracle tunes the crucial parameters in Oracle to get optimal performance. Database performance can also be improved by reclaiming unused space in data storage and by clearing up the log file.
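
The space-monitoring side of this is easy to picture. The sketch below, with an assumed log file path and an assumed 1 GB ceiling, simply reports a transaction log's size against its limit; real tuning tools do far more, but the principle is the same.

```python
# Small sketch of space monitoring: watch the size of a transaction log file
# and warn when it crosses a limit. Path and limit are illustrative assumptions.
import os

LOG_PATH = "/var/opt/mssql/data/sales_log.ldf"    # hypothetical log file
LIMIT_BYTES = 1 * 1024 ** 3                       # assumed 1 GB ceiling

def log_size_report(path: str, limit: int) -> None:
    size = os.path.getsize(path)
    pct = size / limit * 100
    print(f"{path}: {size / 1024 ** 2:.1f} MB ({pct:.0f}% of limit)")
    if size > limit:
        print("WARNING: log has exceeded its limit; consider a backup/shrink cycle")

if __name__ == "__main__":
    log_size_report(LOG_PATH, LIMIT_BYTES)
```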

The memory view of the server instance shows the buffer cache information along with graphs for Hit Rates and Page Allocation on a time scale

Web application optimization tools



Whenever Web application optimization is mentioned, most of us confuse it with Web optimization appliances. But there are software tools available that perform the same task. Cache control is one technique such tools use for optimizing Web apps. Using cache control tools you can limit the cache size, create rules for objects residing in the cache to be updated or deleted, and synchronize cache memory spread across multiple servers. HTTP compression is one of the most popular techniques for optimizing Web-based apps over the WAN. It can be done at the Web server as well as at the browser level. At the server level you can either keep Web content in a pre-compressed format or use third-party software to dynamically compress the content.
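
To make the dynamic-compression idea concrete, here is a minimal sketch of what a server-side compressor does: gzip the body only when the client's Accept-Encoding header allows it, and set the matching response headers. The sample HTML payload is invented purely to show the size saving.

```python
# Minimal sketch of dynamic HTTP compression as a server might do it.
import gzip

def compress_if_supported(body: bytes, accept_encoding: str):
    """Return (payload, headers) honouring the client's Accept-Encoding."""
    if "gzip" in accept_encoding.lower():
        compressed = gzip.compress(body)
        return compressed, {"Content-Encoding": "gzip",
                            "Content-Length": str(len(compressed))}
    return body, {"Content-Length": str(len(body))}

if __name__ == "__main__":
    # Invented, highly repetitive HTML to illustrate the saving
    html = b"<html><body>" + b"<p>product row</p>" * 500 + b"</body></html>"
    payload, headers = compress_if_supported(html, "gzip, deflate")
    print(f"original: {len(html)} bytes, sent: {len(payload)} bytes", headers)
```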

Quest Management Suite for SQL Server



DBAs now have to cope with the ever increasing responsibilities of managing SQL Servers, as more and more business-critical application data gets stored on them. They have to meet the challenge of managing a database environment that increases not only in volume but also in complexity. The Quest Management Suite is a set of tools that can help a DBA manage, monitor and diagnose problems on SQL Server. The suite consists of the following:

  • LiteSpeed, which is a backup and recovery tool.
  • Capacity Manager, which is a storage and resource planning tool.
  • Spotlight, a real-time performance diagnostic tool.

Here we will focus on Spotlight and see how a DBA can benefit from the performance monitoring of the SQL Server.

Spotlight



A database administrator always tries to keep the database up and running, but he can never be sure about bottlenecks that could hamper database performance. In such a scenario, using manual techniques to diagnose and resolve the bottleneck becomes hard for the DBA. Quest Software's Spotlight on SQL Server is a tool that can help a DBA resolve such bottlenecks, and also help him monitor the SQL Server to identify and eliminate the situations where such bottlenecks could arise. Spotlight is a database performance-monitoring tool that allows a DBA to observe actual database activity on a real-time basis in a graphical interface.

Configuration and use



To configure Spotlight for SQL Server, you have to specify a working database that Spotlight will use to maintain monitoring counters and logs. Once the configuration step has been completed, a DBA can create connections to the SQL Server that Spotlight will monitor for performance. On the main screen, Spotlight presents a graphical representation of the activities occurring amongst the components of SQL Server. The DBA can view components such as disk storage, memory and SQL processes on the main screen and also view data flow rates amongst these components. Spotlight represents database server activities on a real-time basis. So, whenever any bottleneck is about to occur, the DBA monitoring the server through Spotlight can determine the problem area and resolve it even before the bottleneck takes effect.


Spotlight also runs a calibration process periodically that automatically sets a baseline for the server based on performance parameters such as Cache Hit Ratio, Latency Period, etc. This allows it to set expected values for the speed of internal data flow and other activity counters like cache size, log buffer size, etc. When a threshold gets crossed, Spotlight sends an alert upon which the DBA can take appropriate action.

The Buffer Cache Hit Ratio shows the percentage of logical reads satisfied by data already in the buffer cache. Earlier, the DBA had to calculate this ratio through SQL queries or native tools, but with Spotlight he can have this information on a real-time basis. This is critical information: as the hit ratio goes down, the DBA can increase or clear the cache to maintain performance.
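
Spotlight surfaces this figure automatically, but as a rough illustration of what the underlying query looks like, the sketch below pulls the Buffer Cache Hit Ratio from SQL Server's performance-counter DMV using pyodbc. The connection string is a placeholder you would adapt to your own server.

```python
# Hedged sketch: compute the Buffer Cache Hit Ratio from
# sys.dm_os_performance_counters. The ratio counter must be divided by its
# "base" counter to get a percentage.
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;Trusted_Connection=yes;")   # assumed connection details

QUERY = """
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Buffer cache hit ratio', 'Buffer cache hit ratio base');
"""

def buffer_cache_hit_ratio() -> float:
    with pyodbc.connect(CONN_STR) as conn:
        rows = {name.strip(): value for name, value in conn.execute(QUERY).fetchall()}
    return rows["Buffer cache hit ratio"] / rows["Buffer cache hit ratio base"] * 100

if __name__ == "__main__":
    print(f"Buffer Cache Hit Ratio: {buffer_cache_hit_ratio():.2f}%")
```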

Spotlight also offers the option of viewing status and graphs for memory, SQL activity and databases. Under the Memory view, the buffer cache of each database object and the page allocation can be monitored. The SQL Activity view shows the current response time for data queries, cache hit rates and CPU utilization in graphical format. Spotlight also maintains an error log, which can be used to pinpoint the reason for any server bottleneck. The DBA can also keep a record of how many sessions are active on the server and how many users are currently accessing it. Spotlight is an important tool for a DBA to keep the database server up and running by eliminating bottlenecks before they can happen.

With the help of AppWatch, one can view the error message exactly as the user will see it

Chroniker AppWatch



At times, system administrators receive complaints that an application is taking longer than it should to respond, even though the database and application are running fine. It then becomes difficult to identify where exactly the application is losing time.

Chroniker AppWatch provides a solution for this. It is performance testing software that allows you to monitor an application and tells you its response time and what the end-user experience is. Using this software, one can easily find the exact point where the application is taking time; for example, whether the delay lies in loading the application or in querying the database. It also provides 'analysis reports' such as SLA reports, and automatically generates reports such as the 'n' slowest tasks per month, the 10 least available tasks per month, etc.
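
As a loose illustration of such a report, the sketch below takes a handful of invented (task, response time) samples and lists the slowest tasks, the same shape of output AppWatch produces automatically.

```python
# Hedged sketch of an "n slowest tasks" report built from invented samples.
from collections import defaultdict

samples = [                     # (task name, response time in seconds)
    ("login", 1.2), ("search", 4.8), ("checkout", 2.9),
    ("login", 1.4), ("search", 5.1), ("report", 7.3),
]

def slowest_tasks(records, n=3):
    worst = defaultdict(float)
    for task, seconds in records:
        worst[task] = max(worst[task], seconds)   # keep each task's worst time
    return sorted(worst.items(), key=lambda kv: kv[1], reverse=True)[:n]

if __name__ == "__main__":
    for task, seconds in slowest_tasks(samples):
        print(f"{task:10s} worst response {seconds:.1f}s")
```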

The interesting thing about this software is that it can simulate real user behavior. It automatically finds icons on the desktop even if the position of an icon has changed, as it captures and recognizes a Windows object just as a human does. Also, when a Web page is loading, you can configure the software to wait until the page is fully loaded before a particular action is taken, such as entering a search string only after the required text box has loaded completely. Moreover, its user-friendly interface lets you design test suites without prior scripting knowledge.

Here, we show how it can be done. Before starting the design of the test suite, one has to note down what steps will be involved in the test. For example, to open a particular application you have to find the respective shortcut on the desktop and then double click on it. For writing the test suite, open the 'Scenario Builder' from Start > All Programs > Chroniker > Scenario Station. For convenience, we will load an already written script, which can be found under the 'script' directory, named 'NrgWebsite.csc.' Save it under another name, say 'IExplorer.' Now, to use this test suite remotely, one needs to register it with the 'Chroniker base'. For this, go to the Tools menu and click on 'Register Scenario'. Now open Internet Explorer on any other computer attached to the same network and open the page http://<server IP>:8888/, where <server IP> is the IP address of the system on which the Chroniker software is installed. When the Web page loads, navigate to Modules > Applications. Here, all scenarios are listed along with the number of transactions they have and their status. Moreover, when an application fails to execute, you can view a screenshot of the page where the error occurred.

To run a scenario immediately, click on the 'Run this scenario now' icon in the row where the scenario is listed. After executing the scenario, a window will appear showing results such as response time.

Using the Chroniker AppWatch browser interface one can remotely keep track of all the scenarios, including their overall status

End user measurement



A key concern for any organization is effective maintenance of applications so that the end user doesn't face any problems while using them. To maintain a high level of online service quality, organizations must adopt an application and service management strategy that ensures end users receive the best possible experience. Some key areas that must be addressed from the user perspective are: the capability to measure application performance and user experience; and an understanding of usage levels, usage patterns and content, right down to the individual user level. Likewise, there can be several other methods to determine key parameters of an application from an end-user perspective. For example, user performance measurement can enable real-time monitoring of user activities, and individual user activity can be analyzed for problem detection and diagnosis for quick resolution. Live Session Capture and Replay helps capture, search and store each end user's actual Web experience.

One can track what a user did and how the system responded. Reporting on the service level of synthetic transactions by business process, geographic location or time period is again one of the key methods to understand the end-user perspective. Another key point is capacity determination, so as to identify bottlenecks such as ineffective load balancing and poorly performing servers.

Best Practices in Application Management

Following best practices provides a framework for achieving results efficiently.



Right now, application performance management is more about reactive troubleshooting than proactive management. So when an application slowdown is reported, the IT staff determines the cause behind the problem and tries to minimize the operational and business impact of the slowdown. However, there are measures that can be taken to make the process of application management more proactive. Here are a few practices that can be adopted:

Step 1: Baseline - Under normal conditions, both the network and individual critical applications should be baselined to determine performance parameters. Whenever an application performance problem is reported, the baseline can instantly offer data for comparison.
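
A minimal baselining sketch, assuming you already collect periodic response-time samples for a critical application, might look like this: record the normal mean and spread, then flag readings that drift well above them.

```python
# Hedged baselining sketch: build a baseline from normal samples and flag
# readings that deviate from it. The sample values are illustrative.
import statistics

def build_baseline(samples):
    """Return (mean, standard deviation) of response times under normal load."""
    return statistics.mean(samples), statistics.stdev(samples)

def deviates(reading, baseline, tolerance=3.0):
    """Flag a reading more than `tolerance` standard deviations above the mean."""
    mean, stdev = baseline
    return reading > mean + tolerance * stdev

if __name__ == "__main__":
    normal_week = [0.81, 0.79, 0.84, 0.90, 0.78, 0.83, 0.88]   # seconds, invented
    baseline = build_baseline(normal_week)
    print("baseline (mean, stdev):", baseline)
    print("1.9s deviates?", deviates(1.9, baseline))
```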

Step 2: Application Flow Analysis - This involves analyzing the application at the flow level, i.e. during an application conversation, as opposed to the packet level, and then presenting summary statistics on the most important aspects of its performance. By interpreting flow-level data into actionable information, this helps maximize efficiency and minimize response time.
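
As a rough sketch of the idea, the snippet below summarizes invented flow records (application, bytes, duration) into per-application statistics instead of inspecting individual packets; in practice the records would come from a flow collector.

```python
# Hedged sketch of flow-level analysis: summarize whole conversations rather
# than packets. Flow records here are invented for illustration.
from collections import defaultdict

flows = [   # (application, bytes transferred, duration in seconds)
    ("CRM", 120_000, 0.8), ("CRM", 95_000, 1.1),
    ("ERP", 400_000, 2.3), ("ERP", 380_000, 2.0),
]

def summarize(records):
    totals = defaultdict(lambda: {"bytes": 0, "duration": 0.0, "count": 0})
    for app, nbytes, secs in records:
        totals[app]["bytes"] += nbytes
        totals[app]["duration"] += secs
        totals[app]["count"] += 1
    for app, t in totals.items():
        avg = t["duration"] / t["count"]
        print(f"{app}: {t['count']} flows, {t['bytes']} bytes, avg {avg:.2f}s per conversation")

if __name__ == "__main__":
    summarize(flows)
```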

Step 3: Categorizing and isolating the problem - It's important to first understand the type of problem and then categorize it for isolation. There can be several reasons for a problem, hence categorization is important.

One cause of trouble can be application code, which, if written inefficiently, is bound to have a negative impact, irrespective of whether the application is transactional or streaming in nature or performs bulk file transfers. The problem will be evident if monitored by an application management system. Another cause could be the network infrastructure.

Before application performance management solutions came into the picture, inadequate bandwidth was considered the primary cause of poor application performance. But now, with companies investing significantly in bandwidth improvement, it has become clear that bandwidth alone is often not the reason for poor application performance.

Another reason could be a poor understanding of protocols. At times an inefficient network protocol is behind an application performance problem. An application performance management tool can help determine such issues and facilitate a better understanding of how a protocol works. This in turn helps developers tune the protocol, making it better for the application.

Sometimes an underpowered server, an outdated operating system, clients running unauthorized software, or cycle-consuming activities such as unscheduled backups can also be the source of slow application performance. Application performance tools can identify this problem and notify the IT staff about it.

Advanced application management solutions take the data they generate and convert it into more illustrative, comprehensive reports, which help the IT team keep a track record of the behavior of certain applications. These in turn help them understand the applications and devise a proper maintenance schedule depending on their behavior.

Understanding the end-user perspective and managing it results in several benefits for the organization, such as maximizing application productivity or profit potential by understanding user behavior, think-time and navigation paths. You can also learn how your users are using the application through usage reports and trend analysis. By optimizing the application's potential you will be enhancing the overall user experience. Another key benefit is the practice of proactive service-level management, by aligning IT service delivery to business initiatives and goals.
