
Data Management in Insurance Industry

PCQ Bureau

Data is all-pervasive: it comes into play well before the initial stages of
client understanding and due diligence, and extends far beyond revenue
generation, encompassing cross-selling and up-selling of products and services.
It also helps in understanding business risks and verifying whether regulatory
compliance requirements are met. The insurance industry depends on promises
made on paper, which are eventually converted into supporting databases and
document repositories. This article elaborates on the types of data, modes of
data acquisition, data checks and usage, and the prevalent techniques for data
management.


Data sources

The insurance industry's data can broadly be classified as employee-related,
distribution-related, customer-related, product-related, operations-related and
accounting-related. Of these categories, employee-related data is required
purely for internal workforce management; the rest have a direct impact on the
cost and revenue of the insurance company.

All this data is collected and stored in databases and data warehouses, and as
documents or images.

Direct Hit!

Applies To: Database admins

Price: NA

USP: Learn techniques for managing
insurance data effectively

Primary Link: None

Keywords: ICO model

Data management stages

Data management can be described in three major stages: Data Acquisition,
Data Quality Management, and Data Exploitation (or Data Utilization). Let us
look at these in detail:

  1. Data acquisition: Data results from new business management and from
    internal operations (HR, accounting, distribution, and product and policy
    management systems). Each system makes its data available in its own data
    structure, and these need to be brought together in an integrated way. One
    step up, they can be consolidated into data warehouses and document
    management systems, jointly referred to as the universe of the insurance
    enterprise data.

  2. Data quality management: Most of the big insurance enterprises have been
    operational for several decades, and hence the data available with them may
    not be 100% accurate. Many such enterprises still use green screens for
    systems support and policy administration. Data quality can be maintained
    and ensured by continuously checking, correcting and preventing data
    errors, thereby making the data ready for exploitation.

  3. Data exploitation: This caters to different needs such as planning and
    analyzing revenue growth, cost control, improving operational efficiency,
    planning and executing business expansions, conceptualizing new products,
    and providing data-related services to customers, distribution networks
    and employees.

The link between data acquisition, data quality management
and data utilization can be described by the ICO (Input-Check-Output) model.
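
To make the model concrete, here is a minimal sketch in Python of how the
three ICO stages could be wired together. The function names (acquire, check,
exploit) and the record fields are illustrative assumptions for this sketch,
not a prescribed design.

# A minimal, illustrative ICO (Input-Check-Output) pipeline.
# All names and record structures here are assumptions for the sketch.

from typing import Iterable

def acquire(sources: Iterable[dict]) -> list[dict]:
    """Input: gather records from source systems into one collection."""
    return [record for source in sources for record in source["records"]]

def check(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Check: split records into clean ones and quality errors."""
    clean, errors = [], []
    for r in records:
        if r.get("policy_no") and r.get("customer_id"):
            clean.append(r)
        else:
            errors.append(r)
    return clean, errors

def exploit(records: list[dict]) -> dict:
    """Output: derive a simple summary for reporting and analysis."""
    premium = sum(r.get("premium", 0) for r in records)
    return {"policies": len(records), "total_premium": premium}

if __name__ == "__main__":
    sources = [{"records": [
        {"policy_no": "P001", "customer_id": "C1", "premium": 1200},
        {"policy_no": None, "customer_id": "C2", "premium": 800},  # error
    ]}]
    clean, errors = check(acquire(sources))
    print(exploit(clean), f"{len(errors)} error(s) routed for correction")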


Let us look into the details of its individual components:

Data Acquisition (Input)

Structured data acquisition is critical to perform all subsequent
data-related functions in an efficient and integrated manner. Data that is
unstructured and not collected in databases is likely to create blind spots in
data analysis. In today's insurance industry, data acquisition happens in five
broad segments:

  1. Customer data: Customer relationship management, customer self-service
    portals, new business management systems and other customer touch-point
    systems are the sources for acquiring this data. It comprises the
    customer's personal data such as family, contact, activities, complaints,
    service requests, financial, health, campaign offers, policies, loans and
    benefits information. This group of data is generally administered in CRM
    systems, customer portals and IVRS.

  2. Distribution data: Distribution administration, sales and service
    management, compensation, compliance and other distribution touch-point
    systems are the sources for acquiring this data. This group of data is
    generally administered in distribution or channel management systems,
    IVRS, FNA, quotation, application and compliance management systems.

  3. Policy administration data: New business, underwriting management,
    claims, accounting and actuarial systems are the sources for acquiring
    this data. It comprises financial needs analysis, quotes, new business
    applications, cashier entries, lock/collection boxes, accounting,
    valuation, loss ratios, document images, turnaround time, underwriting,
    claims and policy services information. This group of data is generally
    administered in legacy policy administration, claims, accounting and
    actuarial systems; however, there could be a number of separate systems
    for underwriting, policy services and new business support.

  4. Product administration data: Product administration and pricing are the
    sources for acquiring this data. It comprises product setup and
    management, profiling, pricing, profitability and product performance. A
    few insurers maintain market research data too. This group of data is
    generally administered in product management systems, actuarial systems,
    the DWH and data marts.

  5. Employee data: It comprises employee personal details such as contacts,
    activities, payroll, educational qualifications, certifications,
    credentials, job history, and training and development information. This
    group of data is generally administered in an HRMS; however, in some cases
    there may be separate payroll and training and development systems.

Data that is missing, unstructured or disconnected in any of the above five
categories creates a gap in the data management chain, and hence it is
recommended to fill these gaps diligently.
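
As a simple illustration of plugging such gaps at the point of acquisition,
the following Python sketch validates incoming records against a
required-field list per segment. The segment names and fields are assumptions
for the example.

# Illustrative intake validation for two acquisition segments.
# Field names and required sets are assumptions for the sketch.

REQUIRED_FIELDS = {
    "customer": {"customer_id", "name", "contact"},
    "policy": {"policy_no", "customer_id", "product_code"},
}

def validate_intake(segment: str, record: dict) -> list[str]:
    """Return the gaps (missing or empty fields) in a record at intake time."""
    present = {k for k, v in record.items() if v}
    return sorted(REQUIRED_FIELDS[segment] - present)

gaps = validate_intake("customer",
                       {"customer_id": "C1", "name": "A. Kumar", "contact": ""})
print(gaps)  # ['contact'] -- flag the gap before it enters the warehouse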

Data Quality Management (Check)

Data acquired through various systems and databases needs to be checked for
the desired quality before being exploited. Data quality errors can result
from inadequate verification of data stored in legacy systems, non-validated
data leaking in from the front end, inadequate integration, redundant data
sources and stores, direct back-end updates, and so on. In today's insurance
industry, data quality management is mostly ignored. Where implemented, it is
done in one of the two ways described below.


Unstructured approach

Most enterprises rely on a few batch programs to check some portions of the
data acquired, and most of the time these programs are triggered only after a
serious problem has been identified in customer or financial data. Some
enterprises schedule these batch runs, while others still run them only on
demand. Such intermittent and unorganized batch runs can neither scale nor
integrate, nor make any notable improvement to the overall data quality of the
enterprise.

Structured approach

Structured data quality management greatly helps to scale up and integrate,
and thus creates a big impact on overall enterprise data quality. A structured
data quality management model would pass through the following stages (a
minimal sketch follows the list):

  • Extract data from the source and/or target systems.

  • Run quality checks to identify data transfer errors,
    data link/reference errors and domain integrity errors.

  • Create a data quality mart to keep all the error records
    and error-related details, to help track and monitor the aging of each
    problem and to support other analyses.

  • Integrate the data quality errors into problem/incident
    trackers so that closures can be tracked.

  • Provide online data quality error reports, along with
    aging, to the data owners so that they can fix the errors.
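
The sketch below illustrates the check and record-keeping stages in Python,
using an in-memory SQLite table as a stand-in for the data quality mart. The
table layout, domain values and field names are assumptions for the example.

# Illustrative structured data-quality run: check records and log errors
# in a "quality mart" with aging info. All names are assumptions.

import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE dq_errors (
    record_id TEXT, error_type TEXT, detail TEXT,
    detected_on TEXT, status TEXT DEFAULT 'open')""")

VALID_STATUS = {"INFORCE", "LAPSED", "SURRENDERED"}  # domain for policy status

def log_error(record_id: str, error_type: str, detail: str) -> None:
    """Keep the error and its detection date for aging and follow-up."""
    conn.execute(
        "INSERT INTO dq_errors (record_id, error_type, detail, detected_on) "
        "VALUES (?, ?, ?, ?)",
        (record_id, error_type, detail, date.today().isoformat()))

def run_checks(policies: list[dict], customers: set[str]) -> None:
    for p in policies:
        # Link/reference check: the policy must point at a known customer.
        if p["customer_id"] not in customers:
            log_error(p["policy_no"], "reference", "unknown customer_id")
        # Domain-integrity check: status must come from the allowed set.
        if p["status"] not in VALID_STATUS:
            log_error(p["policy_no"], "domain", f"bad status {p['status']!r}")

run_checks(
    [{"policy_no": "P001", "customer_id": "C9", "status": "ACTIVE"}],
    customers={"C1", "C2"},
)
for row in conn.execute("SELECT * FROM dq_errors"):
    print(row)  # one reference error and one domain error are logged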


The data volume, its sensitivity and criticality, and the
exposure risk of data quality errors play a vital part in designing the right
run frequency, level of follow-up, escalation settings, and so on.

Data quality errors must be fixed and prevented in time so
that businesses can stop revenue and opportunity losses, cut additional
recovery expenses, and build the confidence of all stakeholders in the value
chain. (A separate paper will discuss in detail the evaluation of existing
data quality management, along with its gaps, to help insurers implement a
proper data quality management system.)

Data Exploitation (Output)

Data that has been acquired and checked thoroughly is ready for exploitation.
Data exploitation is the key stage which, done properly, helps reap the
benefits of efficient data management. In other words, this is the value
generation stage, covering revenue growth, cost savings, operational
efficiency gains, risk controls and other outcomes critical to any business.
This stage is also viewed as the information management stage. In the
insurance industry today, data exploitation, the Output stage of data
management, is done in one of the two ways described below:


Legacy approach

Most enterprises extract the data or information required on an ad hoc basis
from their operational systems, and use applications or batch programs to
generate reports that aid decision making. This method is not sustainable when
demand grows, multi-dimensional needs come up, or data becomes voluminous.
Moreover, data users end up waiting behind a long request queue, which may
make it too late to act on the issue for which the data was originally
extracted.

Structured approach

With the advantages of structured information management already reinforced
above, an enterprise can readily adapt to any volume or time challenges, thus
creating a big impact on the overall information needs that are critical to
the functioning and growth of the enterprise. Structured information
management can be implemented as laid out below (a minimal analysis sketch
follows the list):

  • Enterprise Data Warehouse (EDWH): Most enterprise data,
    referred to as the universe, needs to be extracted, loaded and transformed
    for information needs, and then segmented into summaries and details.

  • Data Marts: Specific business functions (for example,
    accounting, compliance, etc) can have their own data marts to address the
    key business problems in those functions.

  • Reporting Needs: Detailed lists and structured (authored
    and custom) reports can be published from the DWH, data marts and
    operational data stores.

  • Analysis Needs: Summaries need to be built with
    appropriate dimensions and measures to enable multi-dimensional analysis
    from the DWH and data marts.
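
As an illustration of the analysis stage, the following Python sketch (using
pandas) summarizes warehouse-style fact records along two dimensions with one
measure. The dimension and measure names are assumptions for the example.

# Illustrative multi-dimensional summary over warehouse data:
# dimensions (region, product) x measure (premium). Names are assumptions.

import pandas as pd

facts = pd.DataFrame({
    "region":  ["North", "North", "South", "South"],
    "product": ["Term",  "ULIP",  "Term",  "ULIP"],
    "premium": [1200,    3400,    900,     2100],
})

# Summarize with appropriate dimensions and a measure, as a data mart or
# reporting layer might, to support slicing along either axis.
summary = facts.pivot_table(index="region", columns="product",
                            values="premium", aggfunc="sum", margins=True)
print(summary)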

Information management should be viewed from the
perspective of enterprise needs, covering all functions of the enterprise that
impact the business, whether minimally or majorly. All functions of the
enterprise can be seamlessly integrated through suitable enterprise
information management systems.

The frequency of refreshing the EDWH and data marts, the extent of data
integration, and the efficiency of summaries all depend on the business need
and pace; hence, they need to be worked out during the design stage. The data
then needs to be exploited, by creating data marts, reports and analyses, to
bring value to the enterprise.
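
One simple way such design-stage decisions could be captured is a declarative
refresh plan, as in the hypothetical Python sketch below. The store names,
cadences and windows are assumptions, not recommendations.

# Illustrative refresh settings, worked out at design time per store.
# Store names, cadences and windows are assumptions for the sketch.

REFRESH_PLAN = {
    "edwh":            {"cadence": "daily",  "window": "01:00-04:00"},
    "claims_mart":     {"cadence": "daily",  "window": "04:00-05:00"},
    "compliance_mart": {"cadence": "weekly", "window": "Sun 02:00"},
}

for store, plan in REFRESH_PLAN.items():
    print(f"{store}: refresh {plan['cadence']} during {plan['window']}")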

Conclusion

It is recommended that insurance companies do a stock check of their data
management implementation at all three stages: data acquisition, data quality
management and data exploitation. The value of data management should be
clearly understood, and structured approaches need to be adopted at all
stages. With these implemented, an enterprise can make informed decisions,
avoid information starvation, remain highly integrated and scalable, and most
importantly, stay ahead of the competition.

Rajasekar Ramanathan, AVP & DH - Foreign Life & General Insurance, AIGSS

