May 11, 2021
The growing number of structured and unstructured data sources, and the improving accuracy of those sources, have completely changed the role of big data in commercial underwriting over the last three years.
As a result, harnessing big data to make predictive analytics more informed and accurate has become a top priority for commercial insurance carriers.
95% of businesses cite managing unstructured data as a problem for their business.
The biggest challenge for any business, however, is trusting data that comes from multiple, sometimes inconsistent, sources. Consequently, underwriters become apprehensive about the veracity of the data they work with.
Data that is not triangulated or verified can lead to premium leakage and inaccurate analysis of risk in commercial underwriting.
Underwriters and agents are still apprehensive about trusting big data when underwriting a risk or issuing a policy. With the amount of data now available, however, it is becoming impossible to ignore such a force in insurance analytics. More and more insurance carriers are using predictive analytics for accurate underwriting, premium pricing, rating, and detecting fraudulent claims. According to a recent KPMG report, “Data is not simply the facilitator for better underwriting and keener pricing, but is the very DNA of the 21st-century connected organisation.”
Fake reviews in social media sites and instances of jailbreaks of devices are no reason to avoid using big data in insurance. There is fraud in almost any system.
Data scientists and big data engineers swear by the ‘veracity’ of big data and, for this reason, insist on cleaning data sets. With streaming big data, however, the longer you wait to clean the data, the more it decays.
Instead of donning gloves and masks to clean and sanitize data, a better way to ensure its credibility is to assume that all of it is wrong. Yes, that’s right: assume that all of the big data you are working with is infected, and work backwards from there.
Essentially, your focus is NOT on establishing a single source of truth, but on identifying the strains of truth likely present in the data, using a method called triangulation. To illustrate: just because you trust the revenue figure from source A does not mean you should trust the employee count from the same source.
As a term, triangulation has its origins in qualitative research. In the context of big data, it is used to verify the accuracy of a data source by corroborating it against two or more similar or disparate sources.
And of course, at big data scale, triangulation can only be implemented with machine learning: only machines can handle the volume and complexity of the data, and they become smarter over time.
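The idea above can be sketched in a few lines of Python. This is a minimal, field-level illustration, not a production algorithm: the source names, tolerance, and values are all hypothetical, and a real system would learn per-source trust weights rather than use a simple agreement rule.

```python
def triangulate(values, tolerance=0.0):
    """Return a consensus value for one field, or (None, []) if no two
    sources agree.  `values` maps a source name to its reported value;
    numeric values within `tolerance` of each other count as agreeing."""
    sources = list(values.items())
    for i, (src_a, a) in enumerate(sources):
        agree = [src_a]
        for src_b, b in sources[i + 1:]:
            numeric = isinstance(a, (int, float)) and isinstance(b, (int, float))
            if (abs(a - b) <= tolerance) if numeric else (a == b):
                agree.append(src_b)
        if len(agree) >= 2:  # corroborated by at least one other source
            return a, agree
    return None, []

# Triangulate each field independently: trusting revenue from source A
# says nothing about its employee count.
revenue, backed_by = triangulate(
    {"source_a": 5_100_000, "source_b": 5_000_000, "source_c": 9_800_000},
    tolerance=200_000,
)
```

Here sources A and B corroborate each other on revenue while C is an outlier, so the consensus value is accepted with A and B as its backing; the same check would be run separately for employee count, location, and every other field.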
Ensure the bedrock of underwriting, customer submission data, is accurate.
To start with the fundamentals, you have to ensure that all the data behind your base rates for risks is correct. This means simple validations on full-time equivalent (FTE) counts, insured location (we have seen latitude/longitude off by as much as 100 miles), SIC/NAICS codes, year founded, company directors, and so on. The problem, however, is that these fields are off systematically by 15-20% across the books we have run. So before even trying to get an edge on unstructured data, the basics of triangulation can help you right from the start.
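A sketch of what those simple validations might look like, assuming hypothetical field names and thresholds (the 5-mile drift limit and FTE/year ranges are illustrative, not recommendations). The location check compares the submitted coordinates against an independently geocoded address using the haversine great-circle distance, which would catch the 100-mile errors mentioned above.

```python
import math

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles via the haversine formula."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def validate_submission(sub, geocoded, max_miles=5.0):
    """Flag the basic submission fields that are most often wrong."""
    issues = []
    if not 1 <= sub["fte"] <= 500_000:
        issues.append("FTE out of plausible range")
    if not 1800 <= sub["year_founded"] <= 2021:
        issues.append("implausible year founded")
    drift = miles_between(sub["lat"], sub["lon"], *geocoded)
    if drift > max_miles:
        issues.append(f"insured location {drift:.0f} miles from geocoded address")
    return issues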
Confirm loyal customers and reward them.
The insurance industry has been notorious for failing to reward loyal customers. With big data that has been verified over time, repeat customers can finally get their due.
Establish a culture of evidence and enable transparency with customers.
The current underwriting and claims payout mechanisms are mired in manual verification and rely on blind faith on the part of customers as well as insurers. By using reliable big data, insurers can procure evidence in a non-invasive manner and deliver a better customer experience.
Just as the number of data sources and the volume of data generated grow exponentially each year, so does the need to embrace big data in your underwriting and predictive analytics. It is not always easy to trust data straight from its source, which is why having a triangulation method in place is so important: it gives you the peace of mind to embrace these data sources. To truly experience triangulation and see how you can leverage analytics in your underwriting, get on to Intellect Risk Analyst today.