AI’s general impact on society is growing rapidly, and in some ways the insurance sector is at the forefront of the change. Some of the most interesting findings of the 2018 Technology Vision for Insurance pertain to artificial intelligence (AI) in the industry. Four out of five surveyed insurance leaders believe that within two years AI will work next to humans in their organization as a co-worker, collaborator, and trusted advisor.
Before we go any further, it’s worth defining our terms. What do we mean by “AI”? This is a surprisingly mutable term whose meaning has shifted over time. At Accenture, when we say “AI,” we’re referring to any collection of advanced technologies that allow machines to sense, comprehend, act, and learn. In contrast to earlier systems, today’s learning-based AI systems raise questions and challenges more commonly seen in the world of human education.
AI-based decisions and tools are starting to have a profound impact on people’s lives and on the business of insurers. AI cannot be regarded as a simple software tool if it is to be trusted with decisions that affect the lives of customers, employees, and others in an insurer’s ecosystem. To make full use of this powerful technology, insurers need to ensure that their AI is a responsible “citizen.”
An AI that is “taught” in this way can serve an insurer as a new worker that can be scaled across operations. For instance, a large North American home and motor insurer is using deep learning to teach an algorithm to recognize whether a car is undamaged, damaged, or written off, based on pictures taken with a mobile camera. The algorithm continuously learns as it processes new cases, increasing its accuracy over time.
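The continuous-learning loop described above can be sketched in miniature. The snippet below is an illustrative toy, not the insurer’s actual system: it uses a simple nearest-centroid classifier over made-up numeric features standing in for image data, and the class names and `update` method are assumptions for illustration only.

```python
# Toy sketch of a classifier that keeps learning from newly adjudicated cases.
# Feature vectors stand in for image embeddings; all names are illustrative.

class DamageClassifier:
    """Nearest-centroid classifier that updates as new labelled cases arrive."""

    def __init__(self):
        self.sums = {}    # label -> running sum of feature vectors
        self.counts = {}  # label -> number of cases seen

    def update(self, features, label):
        """Fold a newly adjudicated case back into the model."""
        if label not in self.sums:
            self.sums[label] = [0.0] * len(features)
            self.counts[label] = 0
        self.sums[label] = [s + f for s, f in zip(self.sums[label], features)]
        self.counts[label] += 1

    def predict(self, features):
        """Return the label whose centroid is closest to the features."""
        def dist(label):
            centroid = [s / self.counts[label] for s in self.sums[label]]
            return sum((c - f) ** 2 for c, f in zip(centroid, features))
        return min(self.sums, key=dist)

clf = DamageClassifier()
clf.update([0.1, 0.1], "undamaged")
clf.update([0.5, 0.6], "damaged")
clf.update([0.9, 0.95], "written_off")
print(clf.predict([0.85, 0.9]))  # closest to the written_off centroid
```

Each processed case flows back through `update`, which is the (much simplified) sense in which such a system “increases its accuracy over time.”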
So how can an AI be taught the right behaviors? It starts with access to the right data, and lots of it. Insurers with better data in larger amounts will be able to train more capable AI systems. Insurance data scientists will need to use care when selecting training data and taxonomies. They should actively work to minimize or eliminate bias in the data they use to train AI.
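As a minimal illustration of one such check, the sketch below measures how unevenly outcome labels are distributed in a training set and computes reweighting factors to balance them. The labels are invented for the example; real bias audits go much further, examining protected attributes, proxy variables, and label quality.

```python
from collections import Counter

def balance_weights(labels):
    """Per-label sample weights that give each class equal total weight."""
    counts = Counter(labels)
    n_classes = len(counts)
    total = len(labels)
    # A label seen rarely gets a proportionally larger weight.
    return {label: total / (n_classes * count) for label, count in counts.items()}

# Invented, deliberately skewed training labels.
training_labels = ["approve"] * 90 + ["deny"] * 10
print(balance_weights(training_labels))  # rare "deny" cases are upweighted
```

With these weights, the 10 “deny” cases carry the same total influence during training as the 90 “approve” cases, one small step toward not simply learning the skew in the historical data.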
The AI systems used by insurers will also need to be built and trained to provide clear explanations for the actions they take. This will be part of regulatory compliance, but more importantly, decisions made by inscrutable algorithms can harm an insurer’s brand, cause mistrust and even give rise to litigation. Transparency and “explainability” are crucial to mitigating this risk.
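One simple form of “explainability” is a model whose score decomposes into per-feature contributions. The sketch below uses a hypothetical linear risk score invented for illustration; the feature names and weights are assumptions, not any insurer’s actual model.

```python
# Hypothetical linear risk score whose output can be explained term by term.
# All feature names and weights are invented for illustration.
WEIGHTS = {"prior_claims": 2.0, "vehicle_age": 0.5, "annual_mileage_k": 0.1}
BASE_SCORE = 1.0

def score_with_explanation(features):
    """Return (score, explanation): the explanation lists each feature's contribution."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BASE_SCORE + sum(contributions.values())
    # Sort so the biggest drivers of the decision come first.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    explanation = [f"{name} contributed {value:+.2f}" for name, value in ranked]
    return score, explanation

score, why = score_with_explanation(
    {"prior_claims": 2, "vehicle_age": 4, "annual_mileage_k": 12}
)
print(score)   # 1.0 + 4.0 + 2.0 + 1.2 = 8.2
print(why[0])  # the largest single driver of the score
```

A deep neural network offers no such term-by-term decomposition out of the box, which is exactly why explainability techniques for more opaque models are an active area of work.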
As AI systems continue to mature and find new uses across insurance, insurers will need to reckon with the fact that their AIs will represent them with every action they take. This will raise new questions for the industry. For example, how can life insurers make responsible use of a growing selection of health data—related to fitness, biometrics, and even genetics—for automated decision making?
Such questions will loom large as the combination of big data and smarter AI allows insurers to better calculate risk on an individual basis.
Come back next week as I continue the series with a look at the second trend in the Tech Vision report: extended reality. Or, head here to read the full report yourself.